
Semiautomatically Register MMS LiDAR Points
and Panoramic Image Sequence
Using Road Lamp and Lane
Ningning Zhu, Yonghong Jia, and Xia Huang
Abstract
We propose using the feature points of road lamps and lanes to register mobile mapping system (MMS) LiDAR points and panoramic image sequences. Road lamps and lanes are common objects on roads, and their spatial distributions are regular; thus our registration method has wide applicability and high precision. First, the road lamps and lanes were extracted from the LiDAR points by horizontal grid and reflectance intensity, and their endpoints were then optimized as the feature points of road lamp and lane. Second, the feature points were projected onto the panoramic image using initial parameters, and corresponding feature points were extracted near the projection locations. Third, the direct linear transformation method was used to solve the registration model and eliminate mismatched feature points. In the experiments, we compare the accuracy of our registration method with that of other registration methods on a sequence of panoramic images. The results show that our registration method is effective: its registration error is less than 10 pixels, averaging 5.84 pixels over all 31 panoramic images (4000 × 8000 pixels), which is much smaller than the 56.24 pixels obtained by the original registration method.
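The second step above, projecting LiDAR feature points onto the panoramic image using initial parameters, can be sketched under the common equirectangular panorama model, in which each pixel column corresponds to an azimuth angle and each row to an elevation angle. The function name, the axis convention (azimuth measured from the +y axis toward +x), and the image dimensions below are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def project_to_panorama(pts_cam, width=8000, height=4000):
    """Project 3D points (N x 3, already in the panoramic camera frame,
    nonzero range) onto an equirectangular panoramic image.
    Column u encodes azimuth; row v encodes elevation."""
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    r = np.linalg.norm(pts_cam, axis=1)      # range to each point
    lon = np.arctan2(x, y)                   # azimuth in (-pi, pi]
    lat = np.arcsin(z / r)                   # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width    # map azimuth to column
    v = (0.5 - lat / np.pi) * height         # map elevation to row
    return np.column_stack([u, v])
```

A point straight ahead of the camera (on the +y axis, zero elevation) lands at the image center, which gives a quick sanity check for the chosen conventions.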
Introduction
Mobile mapping systems (MMS) can obtain optical image sequences (2D) and LiDAR points (3D) simultaneously and have been extensively used for spatial data acquisition. The panoramic camera in MMS consists of multiple cameras that can capture large scenes and stitch panoramic images; the LiDAR points mainly reflect 3D spatial information, whereas the panoramic image has a rich texture. They are two different types of data, and their geometric reference frames differ. Registration of MMS LiDAR points and panoramic image sequences is important for many applications, such as texturing 3D models (Abayowa et al. 2015), building extraction (Yang et al. 2013), and point cloud classification (Barnea and Filin 2013; Yang and Dong 2013).
The registration methods for LiDAR points (mobile/airborne/terrestrial laser scanning) and optical images (frame/fish-eye/panoramic image) include 2D-2D–based methods (Gong et al. 2014; Parmehr et al. 2014; Brown et al. 2015; Lv and Ren 2015; Miled et al. 2016; Plötz and Roth 2017), 2D-3D–based methods (Zhang et al. 2008; Lepetit et al. 2009; Liu and Stamos 2012; Shao et al. 2017), and 3D-3D–based methods (Kaminsky et al. 2009; Corsini et al. 2013; Zheng et al. 2013). However, the existing methods for registering MMS LiDAR points and panoramic image sequences rely mostly on GPS/IMU (the original registration method) (Yao et al. 2017) or on feature points (point-based registration methods). The original registration method directly obtains the position and orientation parameters from GPS/IMU, but the accuracy is affected by signal dropouts and similar errors, leading to unreliable registration results (Miled et al. 2016). Registration based on feature points can perform well, but automatically extracting feature points from LiDAR points and panoramic image sequences is difficult, and manual selection is unrealistic. Primitive pairs (lines, planes, vehicles, etc.) have also been used for registration, but their automatic extraction remains a challenge.
Wang et al. (2012) proposed an automatic mutual-information registration method for mobile LiDAR and panoramas; however, only part of the panoramic image was used, and the registration was based on a conventional frame camera model. Cui et al. (2017) extracted line features from LiDAR points automatically and from corresponding monoimages (1616 × 1232 pixels) semiautomatically. The registration accuracy was 4.718 and 4.244 pixels for spherical and panoramic cameras, respectively; however, this method requires manual intervention. Li et al. (2018) took parked vehicles as primitive pairs and extracted vehicles from panoramic images (2048 × 4096 pixels) via Faster RCNN. Then particle swarm optimization was used to refine the translation parameters, and the registration error was less than 3 pixels. However, this method relies on parked vehicles, and its application scenarios are limited. Zhu et al. (2018) proposed a skyline-based method for automatic registration of LiDAR points and panoramic/fish-eye image sequences. A brute-force optimization method was used to match skyline pixels and points. The registration error for panoramic images (4000 × 8000 pixels) is approximately 10 pixels; however, skyline pixels could represent very distant objects for which skyline points have no corresponding area, and this may cause the mismatching of skyline pixels and points.
We propose using the feature points of road lamp and lane to register MMS LiDAR points and panoramic image sequences. The MMS LiDAR scanner can provide high-density and high-accuracy point clouds, which is conducive to the extraction of road lamps and lanes. Road lamps are cost-effective for transportation agencies to monitor, maintain, and manage (Zheng et al. 2017). Road lamps have two characteristics: (1) they are commonly installed near road curbs, and (2) they stand vertically on the ground within a certain height range (Li et al. 2016). Road lanes play a critical role in the generation of high-definition road maps and fully autonomous driving systems (Wu et al. 2017) and are painted with highly retroreflective materials on asphalt concrete pavement. A relatively high intensity value is a unique characteristic for road lane extraction from LiDAR clouds (Yang et al. 2012; Riveiro et al. 2015; Yan et al. 2016; Li et al. 2017). Therefore, road lamp and lane are common appendages of roads. These spatial
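The lane characteristic described above, a locally high reflectance intensity against the surrounding pavement, suggests a simple candidate filter: bin the points into a horizontal grid and keep those whose intensity stands out within their cell. The following is a minimal sketch, assuming points arrive as an N × 4 array of (x, y, z, intensity); the function name, cell size, and threshold factor are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def lane_candidates(points, cell=0.5, k=2.0):
    """Keep points whose reflectance intensity exceeds the mean of their
    horizontal grid cell by k standard deviations (lane-paint candidates).
    points: N x 4 array of (x, y, z, intensity)."""
    # Assign each point to a horizontal grid cell by flooring x and y.
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keep = np.zeros(len(points), dtype=bool)
    # Group points by cell and threshold intensity locally.
    _, inverse = np.unique(ij, axis=0, return_inverse=True)
    for cell_id in np.unique(inverse):
        idx = np.where(inverse == cell_id)[0]
        inten = points[idx, 3]
        keep[idx] = inten > inten.mean() + k * inten.std()
    return points[keep]
```

Thresholding per cell rather than globally makes the filter robust to intensity drift across the scene, e.g., from varying scan range or incidence angle.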
Ningning Zhu, Yonghong Jia, and Xia Huang are with the School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, Hubei 430079, China.
Photogrammetric Engineering & Remote Sensing
Vol. 85, No. 11, November 2019, pp. 829–840.
0099-1112/19/829–840
© 2019 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.85.11.829