for laser pointing or for the mounting of the system, the basic
concept remains the same. This or a similar model, including
modifications particular to a given system, is embedded in
the proprietary processing software that is generally used to
produce point clouds from raw lidar measurements. However,
users of the lidar data normally do not have insight into the
model, and they often lack access to the raw data, which are
also stored in proprietary formats (Habib, 2008).
Once the basic sensor model is known, it is desirable to
have an error budget for the individual parameters and to
propagate associated uncertainties to the ground points to pre-
dict the accuracy of the point cloud product. This idea of error
propagation in lidar is also well documented in the literature.
After describing the basic sensor model, Baltsavias (1999) went
on to describe the accuracy of 3D positioning and explore the
error contributions from the lidar system’s individual compo-
nents. Schenk (2001) provided a very detailed review of the
individual error sources in lidar datasets and investigated the
error bounds on those sources. Contributors identified included
range error, scan angle errors, GPS and INS mounting and
alignment errors, and errors in the GPS and IMU systems. Schenk
went on to investigate the contribution that each of these errors
had on the ground coordinates of the final point cloud based
on some assumed errors for the individual components.
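A minimal sketch of how such component errors propagate to the
ground coordinates is given below. It is not Schenk's formulation:
the georeferencing geometry is reduced (no boresight or lever-arm
terms), and every parameter value and 1-sigma uncertainty is an
assumption made purely for illustration.

```python
"""Minimal sketch of first-order error propagation through a simplified
airborne lidar georeferencing model (hypothetical parameter values;
boresight and lever-arm terms omitted for brevity)."""
import numpy as np

def rotation(roll, pitch, yaw):
    """Rotation from the sensor/body frame to the local mapping frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def ground_point(p):
    """p = [Xg, Yg, Zg, roll, pitch, yaw, scan_angle, range]."""
    gnss = p[:3]
    R = rotation(p[3], p[4], p[5])
    # Laser vector in the sensor frame: scan angle about the flight axis.
    los = p[7] * np.array([0.0, np.sin(p[6]), -np.cos(p[6])])
    return gnss + R @ los

# Nominal observation and 1-sigma uncertainties (assumed, for illustration).
p0 = np.array([0.0, 0.0, 1500.0, 0.002, -0.001, 1.2, np.radians(15), 1550.0])
sigmas = np.array([0.05, 0.05, 0.08,                                  # GNSS position (m)
                   np.radians(0.005), np.radians(0.005), np.radians(0.008),  # IMU angles
                   np.radians(0.001),                                 # scan angle
                   0.03])                                             # range (m)

# Numerical Jacobian of the ground coordinates w.r.t. the parameters.
J = np.zeros((3, len(p0)))
for i, h in enumerate(1e-6 * np.maximum(np.abs(p0), 1.0)):
    dp = np.zeros_like(p0)
    dp[i] = h
    J[:, i] = (ground_point(p0 + dp) - ground_point(p0 - dp)) / (2 * h)

# Propagate the (diagonal) parameter covariance to the ground point.
cov_ground = J @ np.diag(sigmas**2) @ J.T
print("1-sigma ground uncertainty (m):", np.sqrt(np.diag(cov_ground)))
```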
Habib (2008) also explored the error budget of lidar systems
based on the components of the direct georeferencing equa-
tion. In this work, Habib first explored the random error com-
ponents and used them to understand the inherent noise in the
dataset as well as its achievable accuracy. He
then explored the systematic errors, focusing on the boresight-
ing offsets and angular biases. An important note that Habib
made is that raw lidar measurements are generally not pro-
vided to the end user. The lack of this raw data, along with the
lack of a rigorous adjustment of redundant measurements and
the associated covariance, means that the end user lacks the
information necessary to predict the accuracy of their dataset.
Schaer et al. (2007) also investigated error propagation for
lidar systems. He considered system component errors similar
to those addressed by Habib, Schenk, and others. However,
Schaer expanded on this by taking into consideration the
impacts that the interaction of the laser energy with local
terrain will have on the ranging accuracy. He described this
terrain interaction as an overlooked, but major contributor
to lidar error. Schaer combined the error contributions from
the propagated random errors with the error estimates from
terrain interaction to form a single value quality indicator, the
q-indicator, for each lidar point. The q-indicator would be
carried with the dataset and used during subsequent pro-
cessing. For example, it would be used to weight the input
observations in a block adjustment, increase the robustness of
automated classification algorithms by considering the accu-
racy of points, and provide additional information to be used
when manually classifying points.
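Schaer's exact q-indicator formulation is not reproduced here, but
the general idea of combining the propagated random error with a
terrain-interaction term into a single per-point quality value, and
then using that value to weight subsequent processing, can be
sketched as follows (the terrain term and all numbers are assumptions):

```python
"""Illustrative sketch (not Schaer et al.'s exact formulation) of folding
propagated random error and a terrain-interaction term into a single
per-point quality value that can weight later adjustment steps."""
import numpy as np

def per_point_quality(sigma_propagated, incidence_angle, footprint_radius):
    """Combine error sources into one 1-sigma quality value per point.

    sigma_propagated : propagated random error from the sensor model (m)
    incidence_angle  : laser incidence angle w.r.t. local surface normal (rad)
    footprint_radius : laser footprint radius on the ground (m)
    """
    # Simple terrain-interaction term: range smearing grows as the beam
    # strikes the surface more obliquely (assumed model, for illustration).
    sigma_terrain = footprint_radius * np.tan(incidence_angle)
    return np.sqrt(sigma_propagated**2 + sigma_terrain**2)

sigma_prop = np.array([0.06, 0.05, 0.07])          # from error propagation (m)
incidence = np.radians([5.0, 35.0, 60.0])          # local incidence angles
q = per_point_quality(sigma_prop, incidence, footprint_radius=0.25)

# Use the quality indicator to weight observations in an adjustment.
weights = 1.0 / q**2
print("q-values (m):", np.round(q, 3))
print("relative weights:", np.round(weights / weights.max(), 3))
```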
All of these referenced papers show that it is possible to
implement error propagation methods for lidar systems and
that the resulting uncertainty estimates can benefit the end
user; however, none of them provide a compact means to rep-
resent these uncertainties for use by downstream users. The
access to a sensor model and predicted uncertainties for the
lidar points provides a very powerful tool to the data proces-
sor and end user. One key application of such data that has
been explored extensively in the literature is data adjustment.
Data adjustment requires a sensor or mathematical model that
includes parameters to be updated during the adjustment.
It also requires weights (and therefore
a priori
uncertainty
estimates) for both the adjustable parameters and the input
observations. To perform a strip-wise adjustment of lidar data,
Vosselman and Maas (2001) used the following model:
\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
=
\mathbf{R}_{\mathrm{strip\ to\ ref}}
\left(
\left( \mathbf{R}_{e} + \mathbf{R}_{\dot{e}t} \right)
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
+
\begin{bmatrix} e_{x} \\ e_{y} \\ e_{z} \end{bmatrix}
\right)
+
\begin{bmatrix} X_{\mathrm{strip\ centre}} \\ Y_{\mathrm{strip\ centre}} \\ Z_{\mathrm{strip\ centre}} \end{bmatrix}
\tag{4}
\]

with

\[
\mathbf{R}_{e} =
\begin{bmatrix}
1 & -\kappa & \varphi \\
\kappa & 1 & -\omega \\
-\varphi & \omega & 1
\end{bmatrix}
\quad \text{and} \quad
\mathbf{R}_{\dot{e}t} =
\begin{bmatrix}
0 & -\dot{\kappa} & \dot{\varphi} \\
\dot{\kappa} & 0 & -\dot{\omega} \\
-\dot{\varphi} & \dot{\omega} & 0
\end{bmatrix} t
\]
where R_strip to ref and X_strip centre, Y_strip centre, Z_strip centre
are transformation parameters between the reference coordinate
system and the error-free strip coordinate system. For adjustment,
this model utilized three positional offsets (e_x, e_y, e_z) and
three angular offsets (ω, φ, κ). However, it also identified the
possibility of angular drifts in the dataset (R_ėt) with respect to
down-track distance and thus allowed for angular drift parameters
(ω̇, φ̇, κ̇).
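As a concrete illustration, the strip error model of Equation 4 can
be transcribed almost directly into code. The sketch below treats
R_ėt as the drift-rate matrix scaled by the down-track time/distance
t; that scaling and all numeric values are assumptions made for
illustration, not details taken from Vosselman and Maas (2001).

```python
"""Sketch of the strip error model of Equation 4: a point in the error-free
strip frame is corrected by positional offsets, small-angle offsets, and
drift terms, then transformed to the reference frame. All values below are
illustrative only."""
import numpy as np

def small_angle(omega, phi, kappa, diag=1.0):
    """Linearized rotation (diag=1) or pure skew-symmetric drift term (diag=0)."""
    return np.array([[diag,  -kappa,  phi  ],
                     [kappa,  diag,  -omega],
                     [-phi,   omega,  diag ]])

def strip_to_reference(xyz_strip, t, offsets, angles, drifts,
                       R_strip_to_ref, strip_centre):
    """Apply Equation 4 to one point at down-track time/distance t."""
    R_e  = small_angle(*angles, diag=1.0)          # constant angular offsets
    R_et = small_angle(*drifts, diag=0.0) * t      # angular drift rates * t
    corrected = (R_e + R_et) @ xyz_strip + offsets
    return R_strip_to_ref @ corrected + strip_centre

# Illustrative inputs.
point = np.array([850.0, -12.0, 3.5])              # strip coordinates (m)
out = strip_to_reference(
    xyz_strip=point, t=42.0,
    offsets=np.array([0.10, -0.05, 0.08]),         # e_x, e_y, e_z (m)
    angles=np.radians([0.002, -0.001, 0.004]),     # omega, phi, kappa (deg -> rad)
    drifts=np.radians([1e-5, -2e-5, 3e-5]),        # drift rates per unit t
    R_strip_to_ref=np.eye(3),
    strip_centre=np.array([480000.0, 5720000.0, 0.0]))
print(out)
```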
Filin (2003) applied his sensor model and error budget to
perform an adjustment by updating physical parameters of his
sensor model. Specifically, Filin solved for the mounting bias
(updates to the mounting angles associated with the
INS
) and a
range bias. His method involved constraining the lidar points
to the ground surface. This method helped to eliminate the
need to identify conjugate features between datasets and was
also less sensitive to dataset density.
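The flavor of such a surface-constrained calibration can be sketched
with a deliberately reduced geometry: a range bias and a single
mounting-angle bias recovered by linearized least squares over a flat
control surface. The one-parameter scan model and all values below
are assumptions, not Filin's actual formulation.

```python
"""Simplified sketch in the spirit of Filin (2003): recover a range bias and
a single mounting-angle bias by constraining points to a known planar ground
surface, here via a linearized least-squares solution."""
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scan over a flat control surface at 100 m elevation, observed
# from 1100 m, corrupted by a +0.20 m range bias and a +0.05 deg angle bias.
surface_z, sensor_z = 100.0, 1100.0
theta_true = np.radians(rng.uniform(-20, 20, 200))
range_true = (sensor_z - surface_z) / np.cos(theta_true)
range_obs = range_true + 0.20 + rng.normal(0, 0.02, theta_true.size)
theta_obs = theta_true + np.radians(0.05)

# Ground heights computed from the biased observations.
z_computed = sensor_z - range_obs * np.cos(theta_obs)

# Linearized model: surface_z - z_computed ~= -cos(theta)*d_range
#                                            + range*sin(theta)*d_theta
A = np.column_stack([-np.cos(theta_obs), range_obs * np.sin(theta_obs)])
b = surface_z - z_computed
(d_range, d_theta), *_ = np.linalg.lstsq(A, b, rcond=None)

print("range correction (m):", round(d_range, 3))                        # ~ -0.20
print("mounting-angle correction (deg):", round(np.degrees(d_theta), 4)) # ~ -0.05
```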
Habib (2008) also built an adjustment model to solve for
transformation parameters necessary to bring overlapping
strips into alignment. In this case, he solved for a rigid body
transformation to go from the observed strip coordinates
to the adjusted strip coordinates. This transformation con-
sisted of three translation parameters (X_T, Y_T, and Z_T) and
three rotation parameters (ω, ϕ, κ). He used the intersection
of planar features to identify lines that were then used as
the conjugate match features between datasets. Here, Habib
introduced the concept of modifying the input covariance of
the line-feature points to account for the fact that points from
overlapping strips are not conjugate. So, the uncertainties of
the input points were being considered in this adjustment. A
least-squares process was used to determine the optimal set of
transformation parameters. These parameters were applied to
the data and conjugate features were compared in the post-ad-
justed datasets to estimate the residual noise in the datasets.
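The core of such a strip-to-strip alignment can be illustrated with a
simplified sketch that estimates the six rigid-body parameters from
conjugate points using a closed-form least-squares fit, rather than
from the intersected line features and modified covariances that
Habib used.

```python
"""Minimal sketch of estimating a rigid-body transformation (three
translations, three rotations) between overlapping strips, using conjugate
points and a closed-form least-squares fit."""
import numpy as np

def fit_rigid_body(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic example: strip B is strip A rotated/translated plus noise.
rng = np.random.default_rng(0)
strip_a = rng.uniform(0, 100, size=(50, 3))
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R *= np.sign(np.linalg.det(true_R))          # ensure a proper rotation
true_t = np.array([1.5, -0.8, 0.3])
strip_b = strip_a @ true_R.T + true_t + rng.normal(0, 0.02, size=(50, 3))

R, t = fit_rigid_body(strip_a, strip_b)
residuals = strip_b - (strip_a @ R.T + t)
print("translation:", np.round(t, 3))
print("RMS residual (m):", np.sqrt((residuals**2).mean()))
```

In practice the per-point uncertainties would enter this fit as weights
on the observations, which the closed-form solution above omits for
brevity.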
Habib
et al
. (2010) expanded on this concept and looked
into solving for updates to system parameters to improve sen-
sor calibration. Habib’s method solved for updates to the sys-
tem mounting position (δΔX, δΔY, δΔZ) as well as updates to
the IMU mounting angles (δΔω, δΔϕ, δΔκ). Because the original
raw data and metadata were not available, Habib proposed a
“quasi-rigorous” adjustment. This required the trajectory of
the sensor platform as input into the process. He interpolated
the location of the sensor from the trajectory for any epoch of
interest to develop the sensor position and the trajectory vec-
tor. The vector was computed between the lidar point of inter-
est and the sensor position, which was then broken down into
horizontal and vertical components and used in the adjust-
ment process. Using this procedure, significant improvements
in the relative fit of the data were observed.
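The trajectory-handling step of such a quasi-rigorous approach can be
sketched as follows; the linear interpolation, the horizontal/vertical
decomposition, and all data values are simplifying assumptions made
for illustration.

```python
"""Sketch of the trajectory step of a quasi-rigorous approach in the spirit
of Habib et al. (2010): interpolate the sensor position at a lidar point's
timestamp, then split the sensor-to-point vector into horizontal and
vertical components for use in the adjustment. Data here are synthetic."""
import numpy as np

# Time-tagged trajectory: columns are GPS time (s), X, Y, Z (m).
trajectory = np.array([
    [1000.0, 480000.0, 5720000.0, 1500.0],
    [1001.0, 480060.0, 5720001.0, 1500.5],
    [1002.0, 480120.0, 5720002.5, 1501.0],
])

def sensor_position(t):
    """Linearly interpolate the platform position at epoch t."""
    return np.array([np.interp(t, trajectory[:, 0], trajectory[:, i])
                     for i in (1, 2, 3)])

def pointing_components(point_xyz, t):
    """Vector from the lidar point to the interpolated sensor position,
    split into horizontal and vertical parts."""
    v = sensor_position(t) - point_xyz
    horizontal = np.array([v[0], v[1], 0.0])
    vertical = np.array([0.0, 0.0, v[2]])
    return horizontal, vertical

point = np.array([480095.0, 5720440.0, 182.0])   # lidar point (m)
h, v = pointing_components(point, t=1001.6)
print("horizontal component (m):", np.round(h, 2))
print("vertical component (m):", np.round(v, 2))
```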
The literature has shown the benefits of providing the end
user with a known lidar sensor model that supports error
propagation and data adjustment. Such a model gives the user a better
understanding of the quality of their datasets, allows the
user to improve the accuracy of the lidar data through block
adjustment, and allows the user to register and fuse other
datasets to the lidar. However, there are several hurdles to
be crossed to realize such benefits. First, many of the sensor
models used in the initial lidar processing from raw data to
point clouds are held as proprietary by the system develop-
ers. Second, there are unique aspects to each sensor, causing