processors. Bernabe et al. (2013) tested the parallel unmixing processing chain on a multi-core system made up of two Intel Xeon CPUs at 2.53 GHz with 12 physical cores. Yang and Zhang
(2012) presented a parallel framework for remotely sensed
image fusion and gave experimental results on an ordinary PC
with four cores. These studies show that parallel computing on multi-core computers can increase the performance of remotely sensed image processing to some extent. However, because of the limited number of cores used in the above studies and the relatively low scalability of the adopted multithreading and OpenMP (an API for shared-memory parallel programming) libraries, the achieved speedup is only about a factor of three.
With the rising number of CPU cores available in a single computer, parallel experiments with more cores and a more scalable parallel library are needed. With a parallel library such as the Message Passing Interface (MPI), flexible and optimal parallel strategies can easily be embodied in the course of remote sensing image processing.
Even though users can make use of the resources of remote clusters such as the Amazon™ computing cluster, remote sensing images are usually stored on local devices, and transmitting a large amount of data is a tedious and time-consuming task. Multi-core computers, which are commonplace and popular, make parallel methods on this type of computer more accessible. We define a multi-core computer as a computer that contains one or multiple CPUs in one machine, with each CPU including multiple cores. To the best of the authors'
knowledge, there are no extensive parallel computing results available for typical algorithms in remote sensing-based mapping (i.e., geometric correction, image fusion, image mosaics, and digital elevation model extractions) using multi-core computers. The objective of this research is to quantitatively identify to what extent the processing performance of these algorithms can be improved when coupled with an optimized parallel mechanism, and to discuss what factors determine parallel performance on a multi-core computer.
This paper applies a straightforward and highly parallelized mechanism to investigate the parallel performance of five categories of typical algorithms on two multi-core computers running the Windows™ and Linux operating systems, respectively. The
next section briefly describes the five categories of algorithms, followed by a parallel method optimized for a multi-core computer. Extensive tests of parallel computing on multi-core computers and discussions are presented in the Parallel Performance and Analysis section, followed by Conclusions.
The Tested Algorithms
Preprocessing
A filtering algorithm and the de-correlation stretch algorithm were selected to verify the parallel performance of preprocessing. A smoothing filter with a 5 × 5 kernel in which each element is 0.04 is used. De-correlation stretch is a process used to stretch the color differences found in a color image. It is described as a technique based on a principal component transformation in which the transformed variables are contrast-stretched and then retransformed to the original coordinates for display (Gillespie et al., 1986).
Campbell (1996) found that de-correlation stretch essentially
produces an alternative linear transformation of the spectral
bands to that produced by principal component analysis. In
this paper, we implement the technique following the method
proposed by Campbell.
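To make the two preprocessing steps concrete, the following is a minimal NumPy/SciPy sketch, assuming the image is held as a (bands, rows, cols) array; the function names and the choice to equalize the principal-component variances to their mean are illustrative assumptions, not the exact implementation used in the paper.

```python
import numpy as np
from scipy import ndimage

def smooth_band(band):
    """5 x 5 mean filter: every kernel element is 0.04 (= 1/25)."""
    kernel = np.full((5, 5), 0.04)
    return ndimage.convolve(band.astype(np.float64), kernel, mode="nearest")

def decorrelation_stretch(cube):
    """PCA-based de-correlation stretch of a (bands, rows, cols) cube:
    transform to principal components, stretch (equalize) their variances,
    and transform back to the original band space for display."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    eigval, eigvec = np.linalg.eigh(np.cov(xc))
    # One common stretching choice (an assumption here): scale every
    # principal component to the mean standard deviation, then rotate back.
    target_sigma = np.sqrt(eigval.mean())
    scale = target_sigma / np.sqrt(np.maximum(eigval, 1e-12))
    stretch = eigvec @ np.diag(scale) @ eigvec.T
    y = stretch @ xc + mean
    return y.reshape(bands, rows, cols)
```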
Geometric Correction
The rational polynomial coefficients (RPCs) based model has been extensively used for high-resolution satellite optical imagery (Fraser et al., 2006; Tao and Hu, 2001) and SAR imagery (Zhang G. et al., 2010; Zhang L. et al., 2011) as an alternative sensor orientation model. In this experiment, geometric correction was applied using the RPC-based model on a RADARSAT-2 image. Moreover, the original SLC (single look complex) image was transformed to its magnitude-phase version.
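As background for the computational load of this step, the sketch below evaluates the standard ground-to-image RPC model, a ratio of third-order polynomials in normalized latitude, longitude, and height; the dictionary keys, the 20-term ordering, and the function names are assumptions for illustration and should be adapted to the actual RPC file format.

```python
import numpy as np

def rpc_polynomial(coeffs, P, L, H):
    """Third-order RPC polynomial in normalized latitude (P), longitude (L),
    and height (H); the 20-term ordering below follows the common RPC00B-style
    convention and is an assumption of this sketch."""
    terms = np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3,
    ])
    return np.dot(coeffs, terms)

def ground_to_image(rpc, lat, lon, h):
    """Project a ground point into image space with the RPC model.
    `rpc` is assumed to hold the offsets/scales and the four 20-term
    coefficient vectors read from the RPC file."""
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h - rpc["height_off"]) / rpc["height_scale"]
    r = rpc_polynomial(rpc["line_num"], P, L, H) / rpc_polynomial(rpc["line_den"], P, L, H)
    c = rpc_polynomial(rpc["samp_num"], P, L, H) / rpc_polynomial(rpc["samp_den"], P, L, H)
    row = r * rpc["line_scale"] + rpc["line_off"]
    col = c * rpc["samp_scale"] + rpc["samp_off"]
    return row, col
```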
Image Fusion
The block-regression-based (BR) fusion algorithm (Zhang J.X. et al., 2010) was selected as an example. For every block of the images, the algorithm derives a synthetic block as the linear function of the blocks of the multispectral bands that has the maximum correlation with the corresponding block of the panchromatic band. The maximum correlation with the block of the panchromatic image results in the maximum enhancement of the spatial details at the cost of only minimal spectral distortion from the fusion operations. The block-regression fusion algorithm can be described by Equation 1:
xs_H(k,i,j) = xs_L(k,i,j) + \frac{xs_L(k,i,j)}{\sum_k c_k \, xs_L(k,i,j)} \left( pan(i,j) - \sum_k c_k \, xs_L(k,i,j) \right)   (1)
In Equation 1, xs_L(k,i,j) and xs_H(k,i,j) are the pixel values before and after fusion, respectively; pan(i,j) is the pixel value of the panchromatic image; k is the band number; (i,j) represents the pixel location; and c_k is the linear regression coefficient in the multiple linear regression of the block region containing the pixel located at (i,j).
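A minimal per-block sketch of Equation 1 might look as follows; the array layout, the use of least squares to obtain the coefficients c_k, and the function name are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def br_fuse_block(ms_block, pan_block):
    """Fuse one block following Equation 1.
    ms_block: (bands, n, n) low-resolution multispectral block resampled to
    the panchromatic grid; pan_block: (n, n) panchromatic block."""
    bands = ms_block.shape[0]
    # Multiple linear regression of the panchromatic block against the
    # multispectral bands of the same block: pan ~ sum_k c_k * xs_L(k).
    A = ms_block.reshape(bands, -1).T.astype(np.float64)   # (pixels, bands)
    b = pan_block.reshape(-1).astype(np.float64)            # (pixels,)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)               # coefficients c_k
    synthetic = np.tensordot(c, ms_block, axes=1)            # sum_k c_k * xs_L(k,i,j)
    # Equation 1: inject the panchromatic residual, weighted band by band.
    # Assumes the synthetic band is nonzero within the block.
    return ms_block + ms_block / synthetic * (pan_block - synthetic)
```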
Image Mosaics
An image mosaic combines two or more images with overlapping or connected areas for an overview of a large image scene (Du et al., 2008). The prerequisite of a mosaic is that the images
are georeferenced or registered. Generally, radiometric normalization and blending processes can be employed for this purpose. In this paper, the input images are already geo-referenced, and the adopted method blends pixels from different input bands into the average value in overlapping areas, whereas in the areas that have only one image, the values are just copied.
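A minimal sketch of this average blending, assuming the inputs have already been resampled onto a common output grid and that validity masks are available, could be:

```python
import numpy as np

def mosaic_average(images, masks):
    """Average-blend mosaic of co-registered, geo-referenced inputs.
    images: list of (rows, cols) arrays on a common output grid;
    masks: matching boolean arrays marking the valid pixels of each input."""
    total = np.zeros_like(images[0], dtype=np.float64)
    count = np.zeros(images[0].shape, dtype=np.int32)
    for img, mask in zip(images, masks):
        total[mask] += img[mask]
        count[mask] += 1
    out = np.zeros_like(total)
    covered = count > 0
    # Overlap areas get the average; single-coverage areas just copy the value.
    out[covered] = total[covered] / count[covered]
    return out
```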
DEM Extractions
We present a modified version of the matching algorithm of Sohn et al. (2005). A flowchart of the proposed DEM/DSM extraction algorithm is shown in Figure 1. First, the points of interest extracted from the left image by the Förstner interest operator (Förstner, 1986) are matched through cross-correlation against points along the piecewise matching line (PML) in the right image, which is determined by image-to-object and object-to-image transformations (Sohn et al., 2005). The maximum and minimum height values are calculated according to the height offset and scale factor in the RPC file. Second,
the average height is determined according to the matched points in a certain area. This value is used to decide the new height range required in the subsequent matching. Third, the interest and grid points from the left image are again matched against points along the PML in the right image with the new height range. Last, the 3D points are generated using RPC-based space intersection (Grodecki et al., 2004). The whole process shows that the computational cost is relatively high.
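For illustration, a minimal sketch of the cross-correlation matching along the PML is given below; the window half-size, the acceptance threshold, and the function names are assumed values, not those of the original algorithm.

```python
import numpy as np

def ncc(win_a, win_b):
    """Normalized cross-correlation of two equally sized image windows."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_pml(left, right, pt, pml_points, half=7, threshold=0.8):
    """Match one left-image interest point against candidate pixels along the
    piecewise matching line (PML) in the right image; the candidate with the
    highest correlation above `threshold` is accepted."""
    r, c = pt
    template = left[r - half:r + half + 1, c - half:c + half + 1]
    if template.shape != (2 * half + 1, 2 * half + 1):
        return None  # interest point too close to the image border
    best_score, best_pt = -1.0, None
    for rr, cc in pml_points:
        window = right[rr - half:rr + half + 1, cc - half:cc + half + 1]
        if window.shape != template.shape:
            continue  # candidate too close to the image border
        score = ncc(template, window)
        if score > best_score:
            best_score, best_pt = score, (rr, cc)
    return best_pt if best_score >= threshold else None
```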
The Method of Parallel Processing
In the course of rectification, one can pursue either a direct or an indirect approach to transform a digital image by an analytical function (Novak, 1992). The indirect method takes each pixel location of the result (e.g., the orthophoto), determines its position in the original image by a certain transformation, and interpolates the gray value with a given resampling method (Figure 2a).
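A minimal sketch of this indirect resampling loop is given below; the transform_to_source() mapping is a hypothetical stand-in for the analytical transformation (e.g., the RPC model), and nearest-neighbour resampling is used only for brevity.

```python
import numpy as np

def rectify_indirect(src, out_shape, transform_to_source):
    """Indirect rectification: for each output (orthophoto) pixel, find its
    position in the original image and resample the gray value there.
    `transform_to_source(r, c)` is an assumed analytical mapping returning
    fractional source coordinates for output pixel (r, c)."""
    out = np.zeros(out_shape, dtype=src.dtype)
    rows, cols = src.shape
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            sr, sc = transform_to_source(r, c)
            ir, ic = int(round(sr)), int(round(sc))   # nearest-neighbour resampling
            if 0 <= ir < rows and 0 <= ic < cols:
                out[r, c] = src[ir, ic]
    return out
```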
On the other hand, the direct method starts from the pixel location in the original image, transforms its