
the reference image is divided into regular, non-overlapping grid cells. The Harris intensity value H of each pixel in each cell is calculated. The H values are arranged in descending order, and the first n points are selected as feature points.
3. After the feature point is detected in the reference image,
a corresponding search area (typically 20×20 pixels) of
the sensed image is predicted by the known geographic
coordinate information of the images. Then, the SSSF-ncc is applied to extract the tie points via a template matching method in this search region. Although the semiglobal matching algorithm has been commonly used in dense matching, it may not be suitable for the method of this study and is comparatively time consuming (Hirschmuller et al. 2007). A bidirectional matching technique, which consists of two steps (positive and reverse matching), is applied to acquire a robust match result (Qu et al. 2016). In the positive matching step, for a point p extracted from the feature point set of the reference image, the corresponding feature point q can always be found through the maximum SSSF-ncc similarity between the template window and the search region. In the reverse matching step, the corresponding point of q in the reference image is obtained through the same strategy. When the two steps yield an identical match result, the matched point pair (p, q) is regarded as a tie point.
4. Several uncertainty factors, such as occlusion and shadow, should be considered in the image matching process; therefore, the tie points extracted above are not always accurate. The projective transformation model is usually used for the tie point consistency check because it handles conventional global transformations (such as rotation, scale, and translation) effectively.
5. The steps of the tie point consistency check are as follows. An iterative refining procedure is used to eliminate mismatched tie points. A projective transformation model is first calculated using the least-squares method with all the tie points. Then the residual errors and the root-mean-square error (RMSE) of the tie points are computed to evaluate the matching accuracy. The tie point with the largest residual error is iteratively removed, and the aforementioned procedure is repeated until the RMSE is less than a given threshold.
6. After the mismatched tie points are eliminated, the triangulation-based PL model is applied to correct the local distortions caused by terrain relief. For images covering mountainous and hilly regions, the first- and second-order polynomial transformation models are better than the PL transformation model for prefitting the nonrigid deformations among such images. However, the test data are located in the plains, without obvious undulating terrain. Therefore, in this paper, the PL model is used to rectify the sensed images. This model first constructs triangulated irregular networks (Delaunay TINs) (Ye et al. 2015) with all the tie points. Then, the transformation parameters in each triangular region are calculated from the coordinates of the triangle's three vertices. Finally, the image is rectified on each triangular region.
x_p = a_0 + a_1 x_q + a_2 y_q
y_p = b_0 + b_1 x_q + b_2 y_q

where (x_p, y_p) and (x_q, y_q) are the coordinates of the reference and remote sensing images, respectively.
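As an illustrative sketch of the grid-based Harris selection described in step 2, the following code divides an image into non-overlapping cells and keeps the n points with the largest Harris response H in each cell. This is not the authors' implementation: the function names (harris_response, grid_feature_points), the box-filter smoothing, and the constant k = 0.04 are common defaults assumed here.

```python
import numpy as np

def harris_response(img, k=0.04):
    # Image gradients via central differences (assumed; the paper does
    # not specify the gradient operator).
    Iy, Ix = np.gradient(img.astype(float))

    def box(a, r=1):
        # Simple box-filter smoothing of the structure-tensor entries.
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2  # Harris intensity H

def grid_feature_points(img, grid=32, n_per_cell=2):
    """Split the image into non-overlapping grid cells and keep the
    n_per_cell points with the largest H value in each cell."""
    H = harris_response(img)
    pts = []
    h, w = img.shape
    for y0 in range(0, h, grid):
        for x0 in range(0, w, grid):
            cell = H[y0:y0 + grid, x0:x0 + grid]
            # Indices of the largest H values in this cell, descending.
            flat = np.argsort(cell, axis=None)[::-1][:n_per_cell]
            ys, xs = np.unravel_index(flat, cell.shape)
            pts.extend(zip(ys + y0, xs + x0))
    return pts
```

Selecting a fixed number of points per cell, rather than globally, keeps the feature points spread across the whole reference image.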
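The bidirectional (positive plus reverse) matching of step 3 can be sketched as below. Plain normalized cross-correlation stands in for the SSSF-ncc similarity defined earlier in the paper, and the function names and window sizes are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-size patches.
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def best_match(src, dst, p, search=10, tmpl=5):
    """Find the point in dst, within a square search region around p,
    whose window best matches the template around p in src."""
    py, px = p
    T = src[py - tmpl:py + tmpl + 1, px - tmpl:px + tmpl + 1]
    best, best_q = -2.0, None
    for qy in range(py - search, py + search + 1):
        for qx in range(px - search, px + search + 1):
            W = dst[qy - tmpl:qy + tmpl + 1, qx - tmpl:qx + tmpl + 1]
            if W.shape != T.shape:
                continue  # window falls off the image
            s = ncc(T, W)
            if s > best:
                best, best_q = s, (qy, qx)
    return best_q

def bidirectional_match(ref, sen, p, search=10, tmpl=5):
    """Positive step: match p (reference) into the sensed image.
    Reverse step: match the result back; accept only if it returns to p."""
    q = best_match(ref, sen, p, search, tmpl)
    if q is None:
        return None
    p_back = best_match(sen, ref, q, search, tmpl)
    return (p, q) if p_back == p else None
```

The reverse check discards one-sided matches, which is what makes the resulting tie points robust.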
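The iterative consistency check of step 5 can be sketched as follows, assuming an eight-parameter projective (homography) model fitted by linear least squares; the function names and the RMSE threshold are illustrative, not the paper's settings.

```python
import numpy as np

def fit_projective(src, dst):
    """Least-squares projective fit with eight parameters:
    u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), similarly for v."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_projective(H, pts):
    # Apply the homography in homogeneous coordinates.
    P = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return P[:, :2] / P[:, 2:3]

def refine_tie_points(src, dst, rmse_thresh=1.0, min_pts=5):
    """Iteratively drop the tie point with the largest residual until
    the RMSE of the projective fit falls below the threshold."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    while len(src) > min_pts:
        H = fit_projective(src, dst)
        res = np.linalg.norm(apply_projective(H, src) - dst, axis=1)
        rmse = float(np.sqrt((res ** 2).mean()))
        if rmse < rmse_thresh:
            break
        worst = int(np.argmax(res))
        src = np.delete(src, worst, axis=0)
        dst = np.delete(dst, worst, axis=0)
    return src, dst
```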
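The per-triangle affine solution of the PL model in step 6 can be sketched as below. The TIN construction itself (e.g., with a Delaunay triangulation routine) is omitted, and the function names are illustrative; only the six-parameter solve from the equations above is shown.

```python
import numpy as np

def triangle_affine(tri_q, tri_p):
    """Solve the six-parameter affine of the PL model for one triangle:
    x_p = a0 + a1*x_q + a2*y_q,  y_p = b0 + b1*x_q + b2*y_q,
    from the three vertex correspondences (exactly determined)."""
    A = np.column_stack([np.ones(3), tri_q[:, 0], tri_q[:, 1]])
    a = np.linalg.solve(A, tri_p[:, 0])  # a0, a1, a2
    b = np.linalg.solve(A, tri_p[:, 1])  # b0, b1, b2
    return a, b

def pl_transform(pt_q, tri_q, tri_p):
    """Map one sensed-image point lying in the given triangle into
    reference-image coordinates using that triangle's affine."""
    a, b = triangle_affine(tri_q, tri_p)
    x, y = pt_q
    return (a[0] + a[1] * x + a[2] * y,
            b[0] + b[1] * x + b[2] * y)
```

Because each triangle carries its own six parameters, the model can absorb local deformations that a single global transformation cannot.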
Experiments and Evaluation
Experiments are conducted on five pairs of multisource remote sensing images from optical, SAR, and map data to evaluate the proposed method. The test sets consist of three parts:
high-resolution and medium-resolution images and image-to-
map sets. Each set consists of an image pair composed of a ref-
erence and a remote sensing image, which are obtained from
different sensors with significant nonlinear grayscale differ-
ences. The test data, implementation details, and experimen-
tal analysis are discussed in the following pages of this paper.
Description of Test Sets
Three categories of multisource remote sensing image pairs
are used to evaluate the effectiveness of SSSF-ncc. Table 1 presents the descriptions of the test sets, and Figure 7 shows the test
data images. The details of each test set are as follows:
High-Resolution Set
In the reference data (optical image), the multispectral red-band data of the Chinese Ziyuan-3 (ZY-3, "China Resources No. 3") satellite are used. The spatial resolution is 6 m at the sub-satellite point, and the image acquisition time is dated June 2017. The remote sensing data (SAR image) come from the Phased Array L-Band Synthetic Aperture Radar (PALSAR) sensor of the Japanese Advanced Land Observing Satellite (ALOS). The spatial resolution is 6 m, and the image acquisition time is dated June 2012. The coverage area of the optical-SAR 1 pair is mainly near the Yunlong Lake in Xuzhou, China, which
is also mixed with some road, house, and other land infor-
mation. The area is located in the plains, without obvious
undulating terrain, and the local distortion is significant at
the riverbank and high buildings. The coverage area of the optical-SAR 2 pair is Xuzhou's Yunlong District, with various
houses, buildings, and roads with richly shaped features.
Similar to the Yunlong Lake, no obvious undulating terrain
is found in the city. However, high buildings clearly lead to
local deformation. These two data sets include significant lo-
cal deformations affected by building displacement due to the
capture from different sensors. In addition, the two images
have a five-year time difference, and a few ground objects
have changed during this period. The rotation difference can
be observed between two images (see Figures 7a and 7b).
Medium-Resolution Set
The reference data (optical image) come from the Band 3 (red) data of the Landsat 5 TM sensor of the U.S. Landsat
series. The spatial resolution is 30 m and the image acquisi-
tion time is dated January 2003. Data (SAR image) come from
the Advanced Synthetic Aperture Radar sensor of the European Space Agency Earth observation satellite ENVIronment SATellite (ENVISAT). The image acquisition time is
dated March 2004, and the resolution is 20 m. The experi-
mental area of the optical-SAR 3 is near the Yunlong Lake in Xuzhou. The test area of the optical-SAR 4 is the Xuzhou
Yunlong District. The images have a temporal difference of
almost 14 months. Figures 7c and 7d show that the terrain
shape structures of the images have lower quality than the
high-resolution set, and the local deformation caused by high
buildings and terrain changes is not as significant.
Map Set
The reference image is obtained from the Baidu map, whereas
the remote sensing image is obtained from Google Earth.
These images were acquired in December 2016, and their
resolution is 2 m. As shown in Figure 7e, the map data have less terrain shape structure information, and differences in radiometric properties and local distortions are evident between the two images. Therefore, detecting the tie points between the two data sets is challenging.
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING, October 2019