PE&RS January 2018 Full - page 36

accurately calibrated before field deployment, and linear as well as radial and tangential corrections were applied to all the images to mitigate lens distortion.
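The radial and tangential terms mentioned above typically follow the Brown-Conrady model. A minimal sketch of applying those distortion terms to normalized image coordinates (the coefficients below are hypothetical, not the study's calibration values):

```python
import numpy as np

def distort_points(xy, k1, k2, p1, p2):
    """Apply Brown-Conrady radial (k1, k2) and tangential (p1, p2)
    distortion to normalized image coordinates (N x 2 array)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.column_stack([x_d, y_d])

# hypothetical coefficients; a negative k1 pulls points toward the center
pts = np.array([[0.1, 0.2], [-0.3, 0.05]])
distorted = distort_points(pts, k1=-0.1, k2=0.01, p1=1e-4, p2=-1e-4)
```

Correction (undistortion) is the inverse of this mapping and is usually obtained iteratively or via a lookup table built during calibration.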
The EOPs describe the orientation (three rotation angles ω, φ, κ) and position (X, Y, Z coordinates) of the camera with respect to a reference system, and thus connect the imaging/camera frame to an object space frame. The estimation of the EOPs can be accomplished in many ways. Here, the sensor installation locations (C1 and C2) were known from GPS surveys, so mainly the orientation angles had to be estimated using pGCPs. The single photo resection (SPR) problem is solved through a point-based approach, where at least three non-collinear conjugate points, i.e., targeted control points, are used in a least-squares adjustment based on the well-known collinearity equations (Lichti et al., 2009). In this study, the camera positions were also calculated by the Perspective-Three-Point (P3P) approach, which determines the pose of the camera from three correspondences between 3D target points and their 2D projections; the implementation by Kneip et al. (2011) was used.
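A minimal sketch of the angles-only resection described above: with the camera position known from GPS, the three rotation angles are recovered from GCP image observations by Gauss-Newton on the collinearity condition. The rotation convention, focal length, and coordinates below are illustrative assumptions, not the study's calibration:

```python
import numpy as np

def rotation(omega, phi, kappa):
    # object-to-camera rotation, R = Rz(kappa) @ Ry(phi) @ Rx(omega)
    # (one common photogrammetric convention; assumed for illustration)
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(angles, cam, f, pts):
    # collinearity condition: image coordinates of object points `pts`
    pc = (rotation(*angles) @ (pts - cam).T).T
    return f * pc[:, :2] / pc[:, 2:3]

def resect_angles(cam, f, pts, obs, iters=20):
    # Gauss-Newton over (omega, phi, kappa) only; numerical Jacobian
    x = np.zeros(3)
    for _ in range(iters):
        r = (obs - project(x, cam, f, pts)).ravel()
        J = np.empty((r.size, 3))
        for j in range(3):
            d = np.zeros(3); d[j] = 1e-6
            J[:, j] = ((project(x + d, cam, f, pts) -
                        project(x - d, cam, f, pts)) / 2e-6).ravel()
        x = x + np.linalg.solve(J.T @ J, J.T @ r)
    return x

# synthetic check: 4 GCPs roughly 2 km in front of the camera
truth = np.array([0.02, -0.05, 0.10])        # omega, phi, kappa [rad]
cam, f = np.array([0.0, 0.0, 0.0]), 0.05     # hypothetical 50 mm lens
gcps = np.array([[100., 50., 2000.], [-80., 120., 2100.],
                 [60., -90., 1900.], [0., 0., 2050.]])
est = resect_angles(cam, f, gcps, project(truth, cam, f, gcps))
```

A full SPR or P3P solver also estimates position; fixing the position, as the GPS surveys permit here, reduces the adjustment to three unknowns.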
GCP Densification Using Stereo Imagery (sGCPs)
To scale the optical flow results, additional ground control is needed beyond the previous pGCPs, ideally an accurate Digital Elevation Model (DEM). To generate a reasonable DEM, it is necessary to have a representative sample of 3D points evenly distributed in the image frame. In lieu of a surface model, sample points, serving as secondary GCPs, can be used; these can be obtained by stereo intersection using the C1 and C2 images. During that process, lines of sight from each camera to the same point on the ice were identified on nearly simultaneously acquired image pairs from both cameras, and then the approximate locations of these points were computed. Since the base/height (B/H) ratio is rather small, multiple intersections were performed so the 3D positions could be averaged to improve accuracy; note that the base was 805 m and the depth was 2,000+ m on average. In total, 43 fairly evenly distributed surface points were extracted with an estimated accuracy of 0.2 m (Chang et al., 1992).
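The intersection of two lines of sight can be sketched with the standard midpoint method; the camera centers and target below are hypothetical, loosely mimicking the ~805 m base and ~2,000 m depth:

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Midpoint of the shortest segment between the two lines of sight
    p1 = c1 + t1*d1 and p2 = c2 + t2*d2 (least-squares intersection)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # perpendicularity conditions (p1 - p2).d1 = 0 and (p1 - p2).d2 = 0
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# hypothetical geometry: ~805 m base, target ~2,000 m away
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([805.0, 0.0, 0.0])
target = np.array([400.0, 150.0, 2000.0])
point = intersect_rays(c1, target - c1, c2, target - c2)
```

With a small B/H ratio the solution is weakly constrained along the depth direction, which is why averaging repeated intersections, as done in the study, improves accuracy.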
Correlation Analysis (CA)
The grey value consistency assumption is fundamental to optical flow estimation. Natural scenes observed by a static sensor generally experience changes in brightness that may make the optical flow calculation challenging. Therefore, using the image gradient instead of the intensity value (Tistarelli, 1994) allows for small variations in the grey value, making the estimation of the relationship between surface motion and image brightness changes more reliable. This concept assumes that the observed brightness/intensity gradient of any object is constant over time, and any change in value at a point is due to motion (Kearney et al., 1987). In addition to the gradient constraint, there is another assumption that nearby points in the image move in a similar way (Schalkoff, 1989). Consequently, noticeable changes in lighting should be avoided to ensure that changes of image intensity between images are caused only by motion in the object space (Klette, 2014). To assess the differences between two images, the color differences between corresponding pixels are summed over the image. Figure 3a shows the RGB intensity distributions for images acquired under varying conditions: a clear day without snow cover, a cloudy day without snow cover, a clear day with snow cover, and a cloudy day with snow cover. These changes, mainly due to solar radiation and seasonal snow, can significantly impact the optical flow computation. In our study, the daily lightness changes were analyzed, and images that were markedly different were removed by the CA.
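The gradient-constancy argument can be checked with a toy example: a uniform brightness offset violates intensity constancy but leaves the spatial gradient untouched (synthetic data, not the study's imagery):

```python
import numpy as np

# synthetic frame and a uniformly brightened copy (illustrative data)
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
brighter = frame + 0.2

# intensity constancy is violated by the illumination change...
same_intensity = np.allclose(frame, brighter)          # False
# ...but the spatial gradient is unaffected, which is why a
# gradient-based data term tolerates such brightness shifts
gy1, gx1 = np.gradient(frame)
gy2, gx2 = np.gradient(brighter)
same_gradient = np.allclose(gx1, gx2) and np.allclose(gy1, gy2)  # True
```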
Initially, the first image pair is selected from the time-lapse image series, IMn (Master) and ISn+1 (Slave). It should be emphasized that the IM was selected taking into account the optimal contrast and lightness characteristics of the glacier surface; afterwards, the filter (CA) may select images with similar conditions. Then, the histogram for each RGB band is computed separately for both images. Next, the mean correlation value over the three channels is analyzed. If the mean value is equal to or greater than 0.90, the pair remains selected. Otherwise, the computation starts again, and a new correlation between IMn and ISn+2 is calculated, with the threshold of 0.90 iteratively reduced by 0.007 in each step. The process stops when the mean is greater than the descending correlation threshold, and the new pair is selected. The value of 0.007 was chosen by experimental tests; for example, a larger step would allow a lower correlation between pairs. In the worst case, the images are separated by 40 days in time; see images 425 to 465 in Figure 3b. Clearly, the variation of the threshold required to pass depends on how dissimilar the light conditions in the RGB channels are in the image pairs. The process is repeated until all the images have been processed; ISn+m becomes IMn+m, where m defines which image will be the next master. In natural environments, such as glaciers, the objects may change substantially over longer times due to the glacier's movement, but over shorter times, such as a few weeks, the motion likely exhibits a similar pattern. Therefore, losing a few consecutive image pairs has limited impact on the motion estimation. Obviously, there is a practical time limit beyond which it may not be possible to identify corresponding objects. As a result of filtering, the original time-lapse image sequence is reduced to a subset of images where the changes in luminosity are relatively small, and thus the selected pairs should likely be processed successfully in the LDOF computation.
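The selection loop described above can be sketched as follows; the histogram binning, the Pearson correlation measure, and the synthetic test frames are assumptions for illustration, not the study's exact implementation:

```python
import numpy as np

def band_correlation(im_a, im_b, bins=256):
    """Mean Pearson correlation of the per-band grey-value histograms
    of two RGB images (correlation measure assumed for illustration)."""
    corrs = []
    for band in range(3):
        ha, _ = np.histogram(im_a[..., band], bins=bins, range=(0, 256))
        hb, _ = np.histogram(im_b[..., band], bins=bins, range=(0, 256))
        corrs.append(np.corrcoef(ha, hb)[0, 1])
    return float(np.mean(corrs))

def select_pairs(images, thresh0=0.90, step=0.007):
    """CA-style pair selection: accept the first slave whose mean band
    correlation with the current master passes a threshold that starts
    at 0.90 and is relaxed by 0.007 per rejected candidate; the
    accepted slave becomes the next master."""
    pairs, n = [], 0
    while n < len(images) - 1:
        thresh, accepted = thresh0, False
        for m in range(n + 1, len(images)):
            if band_correlation(images[n], images[m]) >= thresh:
                pairs.append((n, m))
                n, accepted = m, True
                break
            thresh -= step
        if not accepted:
            break
    return pairs

# synthetic stand-ins: two "similar lighting" frames and one darker frame
rng = np.random.default_rng(1)
def frame(mean):
    return np.clip(rng.normal(mean, 25.0, (120, 120, 3)), 0, 255)
a, dark, b = frame(120), frame(60), frame(120)
pairs = select_pairs([a, dark, b])   # the dissimilar middle frame is skipped
```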
Figure 3b shows the intermediate and final results of the CA. The blue line describes the correlation value for the mean of the three RGB bands for the n-1 image pairs. The black line corresponds to the threshold of the iterative correlation test, which decreases until it coincides with the RGB correlation value, and hence the test is passed. The dark points show the 231 image pairs selected by the CA, i.e., 38 percent of the images passed the test. The lowest threshold value, for the 40-days-apart case, was 0.59. Clearly, this is the worst situation, coinciding with major snow cover and changing weather conditions. The best case, with one day of difference, had a correlation value of 0.99.
Optical Flow (LDOF)
We use the LDOF Matlab implementation developed by Brox et al. (2004) and Brox and Malik (2011) (.edu/~katef/LDOF.html). Several other optical flow algorithms were also tested, such as the Scale-Invariant Feature Transformation (SIFT)-based flow developed by Liu (2009), which establishes dense, semantically meaningful correspondence between two images across scenes by pixel-wise matching, and the pyramidal implementation of the Lucas-Kanade feature tracker algorithm (Bouguet, 2001). Based on the comparisons, the LDOF produced the best motion estimation results for our datasets. This algorithm implements a coarse-to-fine variational framework between two images I1 and I2, and computes the displacement field by minimizing the functional energy E(w) using the following model (Equation 1). The method is based on energy minimization that penalizes deviations from the constraints imposed on the model.
E(w) = E_color(w) + γ E_gradient(w) + α E_smooth(w) + β E_match(w, w1) + E_desc(w1) (1)
where α, β, and γ are tuning parameters, which can be determined manually according to qualitative evidence based on a large variety of images, or can be estimated automatically from ground truth data. These three parameters were experimentally determined by varying their ranges until optimal