PE&RS, November 2019, page 838

Mismatching Feature Point Elimination
Automatically and accurately eliminating mismatched feature points is essential for reliable registration. Our elimination method is effective only when the mismatched points are few; thus, the majority of erroneously extracted feature points must be removed before registration. Figure 17 shows three types of error extraction.
Figure 17a shows leakage extraction. The Canny algorithm is used for image edge extraction, and its thresholds are set relatively broadly to avoid overextraction. As a result, some road lane edges are not extracted, but this type of error is known in advance and does not affect the matching. Figure 17b and 17c show erroneous extractions (e.g., billboard, vehicle). The highest (or lowest) edge point is taken as the feature point, and two candidate edge points are obtained by searching from left to right and from right to left. If the distance between the two candidates is larger than 50 pixels, the point is eliminated, as Figure 17b shows; otherwise, the average of the two candidates is taken as the feature point, as Figure 17c shows, although such points still need to be eliminated during registration.
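The 50-pixel consistency rule described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function and parameter names (`edge_mask`, `first_edge_point`, `lane_feature_point`, `max_gap`) are our own, and the input is assumed to be a binary Canny edge mask cropped to a region of interest around one lane marking.

```python
import numpy as np

def first_edge_point(edge_mask, reverse=False):
    # Scan columns left-to-right (or right-to-left) and return the
    # highest (smallest-row) edge pixel of the first non-empty column.
    height, width = edge_mask.shape
    cols = range(width - 1, -1, -1) if reverse else range(width)
    for x in cols:
        rows = np.flatnonzero(edge_mask[:, x])
        if rows.size:
            return np.array([float(x), float(rows[0])])  # (x, y)
    return None

def lane_feature_point(edge_mask, max_gap=50.0):
    # Two candidate feature points from opposite scan directions.
    p1 = first_edge_point(edge_mask)
    p2 = first_edge_point(edge_mask, reverse=True)
    if p1 is None or p2 is None:
        return None  # leakage extraction: no edge pixels found
    if np.linalg.norm(p1 - p2) > max_gap:
        return None  # error extraction (billboard, vehicle): eliminate
    return (p1 + p2) / 2.0  # consistent pair: average is the feature point
```

The rule rejects a point only when the two scan directions disagree by more than `max_gap` pixels, which matches the 50-pixel threshold used in Figure 17b.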
Figure 18a–e shows the influence of vehicle shadows, and Figure 18f–j shows the influence of expansion joints and window size, on road lane feature point extraction in different panoramic images. These extraction errors were completely eliminated. The registration results of 31 panoramic images confirm that our elimination method is effective.
Registration Accuracy Evaluation
In the section "Stitching Error of Panoramic Image," we discussed the stitching errors of the panoramic image; these are unequal at different locations, so registration accuracy evaluated with only a few points is imprecise. Even so, feature point evaluation is the most commonly used approach at present (Cui et al. 2017; Li et al. 2018; Zhu et al. 2018), and evenly distributed feature points can weaken this effect. In the experiments above, the feature points are extracted automatically, and the registration results show that our registration method performs much better than the original registration method when evaluated on the same feature points. Figure 19 shows our registration accuracy (δ) for different numbers of feature points (n) and indicates that δ is independent of n.
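The accuracy measure used throughout this evaluation, the average pixel error over a set of checkpoints, can be sketched as below. This is a minimal illustration under our own naming (`mean_registration_error`, `registered_pts`, `reference_pts`), not the authors' implementation.

```python
import numpy as np

def mean_registration_error(registered_pts, reference_pts):
    # Mean Euclidean distance, in pixels, between each checkpoint's
    # position in the registered panoramic image and its reference
    # position. Both inputs are (n, 2) arrays of (x, y) coordinates.
    a = np.asarray(registered_pts, dtype=float)
    b = np.asarray(reference_pts, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

The per-image errors reported below (e.g., 31.16 or 10.45 pixels) are averages of this kind over the checkpoint set.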
The same data sets (nos. 4–8) were used by Zhu et al. (2018) (where they are named nos. N − 2, …, N + 2), as shown in Table 4. Zhu et al. (2018) calculated the registration accuracy of method I and the skyline method with 38 checkpoints, and the average errors over these five panoramic images were 31.16 and 11.08 pixels, respectively. Then the automatically extracted feature points were used to evaluate the accuracy of our method, and the average errors of methods I and II were 45.13 and 5.88 pixels, respectively. Finally, the average error of our method was 10.45 pixels with the 38 checkpoints, which is comparable to the skyline method. However, our method has higher applicability than that of Zhu et al. (2018). The skyline
Figure 18. Eliminating mismatching feature points.
Figure 19. The registration accuracy with different numbers of feature points.