
Information on each collected tie-point, such as the tie-point ID and coordinates, can also be displayed in a separate window (see Figure 2b), so that a user can easily edit incorrect tie-points as well as monitor progress. To help a user select tie-points more efficiently, a range of basic image processing tools is also included, and our in-house stereo matching algorithms, i.e., Adaptive Least Squares Correlation (ALSC) (Gruen, 1985) and region growing (GOTCHA) (Otto and Chau, 1989), have been integrated into the software to produce a denser disparity map from the collected manual tie-points, if required.
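The densification step is not part of the manual selection itself, but a minimal sketch of an Otto–Chau-style region-growing matcher seeded from manual tie-points may make the idea concrete. This is not the in-house ALSC/GOTCHA implementation: it uses translation-only normalised cross-correlation in place of ALSC patch refinement, assumes rectified imagery (corresponding points share a row), and the function names, window size, and thresholds are illustrative only.

```python
import heapq
import numpy as np

def ncc(left, right, x, y, xr, w=5):
    """Normalised cross-correlation of (2w+1)^2 patches centred on
    (x, y) in the left image and (xr, y) in the right image."""
    a = left[y - w:y + w + 1, x - w:x + w + 1].astype(float).ravel()
    b = right[y - w:y + w + 1, xr - w:xr + w + 1].astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else -1.0

def grow_from_seeds(left, right, seeds, w=5, search=2, min_score=0.7):
    """Best-first propagation of matches from seed tie-points.
    seeds: list of (x_left, y, x_right) integer tie-points, assumed to lie
    at least w pixels from the image border; the stored disparity is
    d = x_right - x_left."""
    h, wd = left.shape
    disp = np.full((h, wd), np.nan)
    heap = [(-ncc(left, right, x, y, xr, w), x, y, xr - x) for (x, y, xr) in seeds]
    heapq.heapify(heap)
    while heap:
        _, x, y, d = heapq.heappop(heap)      # best-scoring candidate first
        if not np.isnan(disp[y, x]):
            continue                          # pixel already matched
        disp[y, x] = d
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (w <= nx < wd - w and w <= ny < h - w) or not np.isnan(disp[ny, nx]):
                continue
            # search a small disparity range around the parent's disparity
            best, best_d = -1.0, 0
            for dd in range(d - search, d + search + 1):
                if w <= nx + dd < wd - w:
                    s = ncc(left, right, nx, ny, nx + dd, w)
                    if s > best:
                        best, best_d = s, dd
            if best >= min_score:
                heapq.heappush(heap, (-best, nx, ny, best_d))
    return disp
```

Seeding the priority queue from high-confidence manual tie-points is what allows the propagation to reach weakly textured regions that a purely automatic seeding step might miss.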
Selection of Tie-Points
In this work, we define three types of tie-points and employ slightly different selection procedures to prepare sub-pixel reference tie-points:
a) Feature based: irregularly distributed tie-points.
b) Regular grid: regularly distributed tie-points.
c) Discontinuities: tie-points around depth discontinuities.
Type (a) (i.e., feature-based) tie-points are collected to generate highly detectable reference tie-points from standard feature matching algorithms. Since many detectable image features are found around highly textured areas, feature-based tie-points can easily be selected by visual identification. The selection procedure initially defines a number of "interesting" points in the left image using generic feature extraction algorithms, and then asks participants to identify the corresponding right point by adjusting the offset of a stereo cursor. Corresponding tie-points in the right image are therefore initially defined at integer resolution; however, averaging a set of manual selections yields a sub-pixel selection. Alternatively, ALSC can be applied to the right tie-point to refine the pixel position.
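As a simple illustration of the averaging step, the sketch below (with an illustrative function name) takes the integer right-image picks collected from several participants for one left tie-point and returns their mean as the sub-pixel reference position; the ALSC refinement mentioned as an alternative is not shown.

```python
import numpy as np

def average_right_picks(selections):
    """Mean of N participants' integer (x, y) picks in the right image
    for a single left tie-point; the mean is generally sub-pixel."""
    return np.asarray(selections, dtype=float).mean(axis=0)

# e.g. three participants picking the same feature
print(average_right_picks([(412, 233), (413, 233), (412, 234)]))  # [412.333 233.333]
```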
Type (b) (i.e., regular grid) tie-points are proposed to collect regularly distributed reference tie-points across the whole image. This improves the chance of obtaining reference tie-points near small depth discontinuities or in less-textured areas. Unlike feature-based selection, it is more challenging to pick a correct tie-point by visual identification alone. Therefore, participants are asked to collect tie-points by visual validation, i.e., an initial guess for the right tie-point is given at the beginning. To provide good starting points to participants, a dense disparity map is generated using an in-house stereo processing pipeline and sampled at regular grid points. These initial tie-points are then visually inspected, e.g., by moving the stereo cursor around the grid points and checking for any abnormality, by adjusting the disparity offset of the stereo cursor at the point to check whether the estimate appears to be the best solution, or by doing both with images scaled up by a factor of 1.5 or 2, which increases the chance of obtaining correct correspondences (Chan et al., 2003). Finally, the tie-points that pass this validation test are collected.
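A minimal sketch of how the initial guesses for type (b) points might be produced: sample the pipeline's dense disparity map at a regular grid and turn each sample into a starting offset for the stereo cursor. The grid step, the disparity sign convention (d = x_right − x_left), and the assumption of rectified imagery are ours, not taken from the paper.

```python
import numpy as np

def grid_seed_points(disparity, step=50, margin=10):
    """Sample a dense disparity map at a regular grid and return
    (left point, initial right-image guess) pairs for visual validation."""
    h, w = disparity.shape
    seeds = []
    for y in range(margin, h - margin, step):
        for x in range(margin, w - margin, step):
            d = disparity[y, x]
            if np.isfinite(d):                      # skip holes in the pipeline output
                seeds.append(((x, y), (x + d, y)))  # rectified: same row in both images
    return seeds
```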
Type (c) tie-points (i.e., discontinuities) aim to collect reference tie-points in places where general automated matchers may fail (so-called pathological cases). These areas normally result from occlusions, insufficient texture, and strong depth discontinuities, i.e., pixels whose neighboring disparities differ by more than a threshold (refer to the Middlebury stereo evaluation (Scharstein et al., 2001)). Among these, we are particularly interested in matching performance around depth discontinuities, since some algorithms deliberately enforce local smoothness around depth discontinuities in order to densify a disparity map. We manually select two pairs of tie-points around such an area, i.e., one tie-point from the background and another from the foreground, and evaluate how correctly an algorithm handles the scene occlusions (see Figure 3). Scene occlusion is a well-known issue in classic stereo matching, so it might be interesting to see whether an automated pipeline for populating type (c) tie-points (i.e., discontinuities) could be designed using conventional feature detectors. However, without knowing the true foreground and background segmentation, we found it difficult to make this fully automated.
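As noted above, fully automating type (c) selection is hard without a true foreground/background segmentation; what can be automated is flagging candidate areas. A rough sketch, assuming a dense disparity map and a disparity-jump threshold of our own choosing, in the spirit of the Middlebury discontinuity regions:

```python
import numpy as np

def discontinuity_candidates(disparity, threshold=2.0):
    """Boolean mask of pixels whose horizontal or vertical neighbour differs
    in disparity by more than `threshold` -- candidate areas around which
    an expert could place type (c) tie-point pairs."""
    d = np.asarray(disparity, dtype=float)
    mask = np.zeros(d.shape, dtype=bool)
    dx = np.abs(np.diff(d, axis=1)) > threshold   # horizontal jumps
    dy = np.abs(np.diff(d, axis=0)) > threshold   # vertical jumps
    mask[:, 1:] |= dx
    mask[:, :-1] |= dx
    mask[1:, :] |= dy
    mask[:-1, :] |= dy
    return mask
```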
To select type (c) tie-points, an expert manually chooses a set of challenging tie-points around a typical problematic area, and participants are asked to validate them. The validation process is quite similar to the regular grid selection, except that this time no clues are given around tentative tie-points.
Error Metrics
The next step is to estimate the error bounds according to the statistics recorded during the three types of manual tie-point selection. Suppose that $T^k = \{t_0^k, \ldots, t_M^k\}$ is the set of left tie-points of type ($k$), where $k \in \{a, b, c\}$ and $M$ is the number of left tie-points defined in type ($k$). Similarly, we can define the set of right tie-points corresponding to $t_i^k$ from the manual selections as $S_i^k = \{s_{i,0}^k, \ldots, s_{i,N}^k\}$, where $N$ is the number of participants performing the manual measurement.
Although none of the measurements in $S_i^k$ is guaranteed to coincide exactly with the ground truth, it is highly likely that a true correspondence of $t_i^k$ can be found within a cluster of the selected points. Thus, our scoring method defines a local cluster of $S_i^k$ based on its mean $m_i^k$ and standard deviation $\sigma_i^k$ and evaluates the final matching score accordingly.
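The scoring function itself is only outlined at this point, but the cluster statistics it relies on are straightforward. A minimal sketch computing the per-point mean $m_i^k$ and standard deviation $\sigma_i^k$ of the participants' selections, together with an assumed acceptance test for an algorithm's match (the paper's actual scoring criterion may differ):

```python
import numpy as np

def cluster_stats(selections):
    """Mean m_i^k and standard deviation sigma_i^k of the N participants'
    right-image picks S_i^k for one reference tie-point t_i^k."""
    pts = np.asarray(selections, dtype=float)      # shape (N, 2)
    return pts.mean(axis=0), pts.std(axis=0, ddof=1)

def inside_cluster(candidate, mean, sigma, n_sigma=3.0):
    """Assumed acceptance test: the algorithm's match is counted as correct
    if it lies within n_sigma standard deviations of the cluster mean on
    both axes (illustrative only; not the paper's scoring function)."""
    return bool(np.all(np.abs(np.asarray(candidate, dtype=float) - mean)
                       <= n_sigma * sigma + 1e-9))
```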
Figure 2. Example of a stereo anaglyph showing a stereo cursor: (a) the offset of a stereo cursor is automatically set according to the supplied disparity map; and (b) the triangulated 3D position of a corresponding pair is also displayed when there is calibration data, based on the use of the CAHVOR calibration model employed by NASA for MER and MSL cameras (Di and Li, 2004).
Figure 3. Example of a pair of tie-points around an object boundary, e.g., $t_4$ and $t_5$ are a pair of left tie-points collected from the background and foreground to evaluate the rewarding score.