PE&RS July 2015 - page 544

\[
\Sigma_{ii} =
\begin{bmatrix}
\sigma_{X_i}^{2} & \sigma_{X_i Y_i} & \sigma_{X_i Z_i} \\
 & \sigma_{Y_i}^{2} & \sigma_{Y_i Z_i} \\
\text{sym} & & \sigma_{Z_i}^{2}
\end{bmatrix}.
\tag{1}
\]
While it may be possible to compute the absolute uncertainties of all points in a point cloud and store them (Σ_ii) with the data, it would be difficult or impossible to meet the requirements for calculating relative (point-to-point) uncertainty by forming and directly storing the full ground covariance matrix. Relative accuracies describe the accuracy between point pairs: they ignore the errors that are common (correlated) between the points and characterize the accuracy of the distance and direction of the vector between the points of interest. If relative uncertainties are required, one must have access to off-diagonal cross-covariance data between points, as shown in Equation 2, where Σ_Rel,ij is the (3 × 3) relative uncertainty covariance matrix for points i and j, Σ_ii and Σ_jj are the (3 × 3) error covariance matrices for points i and j, respectively, Σ_ij is the cross-covariance between points i and j, and Σ is the full covariance matrix for a set of n points.
The direct-storage solution for full covariance matrix representation requires the storage of 3n(3n + 1)/2 elements. For example, full-matrix representation for the four points in Figure 1 requires the storage of 78 matrix elements as opposed to the storage of only 24 elements if only absolute uncertainty is required. Full covariance matrix representation for one million 3D points would require the storage of more than four trillion elements.
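The storage counts above follow directly from the symmetric-matrix formula. A quick illustrative check (the function names are ours, not from the paper):

```python
def full_covariance_elements(n_points: int) -> int:
    """Unique elements in the symmetric (3n x 3n) full covariance matrix."""
    n = 3 * n_points
    return n * (n + 1) // 2

def absolute_only_elements(n_points: int) -> int:
    """Per-point symmetric (3 x 3) covariance: 6 unique elements each."""
    return 6 * n_points

print(full_covariance_elements(4))          # -> 78, the four points of Figure 1
print(absolute_only_elements(4))            # -> 24, absolute uncertainty only
print(full_covariance_elements(1_000_000))  # -> 4,500,001,500,000 (more than four trillion)
```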
\[
\Sigma_{\mathrm{Rel},ij} = \Sigma_{ii} + \Sigma_{jj} - \Sigma_{ij} - \left(\Sigma_{ij}\right)^{T},
\qquad
\Sigma_{(3n \times 3n)} =
\begin{bmatrix}
\Sigma_{11} & \Sigma_{12} & \cdots & \Sigma_{1n} \\
 & \Sigma_{22} & \cdots & \Sigma_{2n} \\
 & & \ddots & \vdots \\
\text{sym} & & & \Sigma_{nn}
\end{bmatrix},
\qquad
\Sigma_{ij} =
\begin{bmatrix}
\sigma_{X_i X_j} & \sigma_{X_i Y_j} & \sigma_{X_i Z_j} \\
\sigma_{Y_i X_j} & \sigma_{Y_i Y_j} & \sigma_{Y_i Z_j} \\
\sigma_{Z_i X_j} & \sigma_{Z_i Y_j} & \sigma_{Z_i Z_j}
\end{bmatrix}.
\tag{2}
\]
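As an illustrative sketch (not taken from ULEM itself), Equation 2 amounts to slicing the relevant 3 × 3 blocks out of the full covariance matrix; the function and variable names here are our own:

```python
import numpy as np

def relative_covariance(full_cov: np.ndarray, i: int, j: int) -> np.ndarray:
    """Equation 2: Sigma_Rel,ij = Sigma_ii + Sigma_jj - Sigma_ij - Sigma_ij^T.

    full_cov is the (3n x 3n) ground covariance matrix; i and j are
    0-based point indices.
    """
    bi, bj = 3 * i, 3 * j
    s_ii = full_cov[bi:bi + 3, bi:bi + 3]   # absolute covariance of point i
    s_jj = full_cov[bj:bj + 3, bj:bj + 3]   # absolute covariance of point j
    s_ij = full_cov[bi:bi + 3, bj:bj + 3]   # cross-covariance between i and j
    return s_ii + s_jj - s_ij - s_ij.T

# Two points whose errors are perfectly correlated: the common error cancels
# and the relative uncertainty vanishes.
block = np.diag([0.04, 0.04, 0.09])               # hypothetical variances (m^2)
full = np.block([[block, block], [block, block]])
print(relative_covariance(full, 0, 1))            # -> all zeros
```

With zero cross-covariance the same call instead returns Σ_ii + Σ_jj, which is the familiar uncorrelated-error result.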
An alternative to direct storage is to store metadata re-
quired for a detailed physical model that is specific to the
sensor used to capture the dataset. Examples of this approach
have been discussed in Schenk (2001). This has the advantage
that it uses the sensor model and metadata that generated the
three-dimensional point cloud and thus provides a rigorous
result. However, every sensor and its associated metadata are
unique, and consequently, a unique model and the implemen-
tation of its associated storage schema must be generated for
every sensor and be deployed to all exploitation software that
will use the data. To support the adjustability of datasets and
the development of full covariance matrices among points,
a physical model must also be able to store the correlations between its parameters and the temporal correlations (i.e., cross-correlations) of a given parameter; unfortunately, the storage of these correlations is often overlooked. Furthermore, lidar vendors today generally consider their models proprietary and do not make them available to the public. Considering these difficulties, even though a physical sensor model represents the most rigorous approach, it does not lend itself to widespread implementation.
An additional alternative is to assemble a model that ap-
proaches the rigor and utility of the detailed physical sen-
sor model while allowing for standardization of metadata
and ease of storage. This document recommends such an
alternative in the Universal Lidar Error Model (ULEM), which was originally presented by Theiss et al. (2009); it was, to
the authors’ knowledge, the first model designed to support
the entire span of four geopositioning scenarios listed at the
beginning of this section. To support these scenarios, ULEM
provides a standardized set of adjustable parameters, provides
for the storage of uncertainties of those parameters, and pro-
vides for modeling of the correlations among parameters both
within and among data units.
The next section provides a literature review of related
work in lidar sensor modeling. It is followed by discussion of
the ULEM concept and implementation, including its enhancements over the years since its inception. The paper proceeds
with an experiment with real data to verify the ULEM model
with respect to the first geopositioning scenario listed at the
beginning of this section, i.e., the absolute geolocation, and
associated predicted uncertainty, at a point. Finally, the paper
concludes with a summary.
Previous Work and Relationship to Current Effort
The goal of ULEM is to provide a common mechanism for the
error propagation and data adjustment of lidar datasets. Nei-
ther of these concepts is new to lidar literature, and both are
based on sensor modeling. This section provides an overview
of previous work in these areas, relates the ULEM efforts to
these previous works, and then explains unique aspects of the
ULEM implementation.
A sensor model is the set of mathematical relationships of
sensor characteristics required for the reconstruction of object
space points from the corresponding sensor measurements
(Mikhail, 2001). For lidar, sensor models are required to trans-
form the raw sensor measurements such as pointing, orienta-
tion, position, and time into a geospatial product such as a
point cloud. The idea of lidar sensor modeling, including the
investigation of the individual components of lidar collection
systems, is well documented. An early example is Vaughn et al. (1996), where georeferencing equations were developed
and error contributors were discussed. Baltsavias (1999)
explored the basic formulas and relations for laser ranging
and then expanded into the relationships among the other
components (e.g., pointing) that comprise a complete airborne
lidar system, thereby describing how the various constitu-
ents contribute to the composition of the final lidar product.
Schenk’s (2001) primary focus was on modeling and analyz-
ing systematic errors in lidar data. In order to do this, he
decomposed his lidar model and accounted for the individual
contributors. For lidar, the sensor model is often referred to
as the direct georeferencing equation and described in Filin
(2003) and Goulden (2010) as:
\[
\begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix}
=
\begin{bmatrix} X_o \\ Y_o \\ Z_o \end{bmatrix}
+ R_w R_G R_{INS}
\left(
\begin{bmatrix} \delta x \\ \delta y \\ \delta z \end{bmatrix}
+ R_m R_s
\begin{bmatrix} 0 \\ 0 \\ \rho \end{bmatrix}
\right)
\tag{3}
\]
where the final ground point (x_l, y_l, z_l) is determined by combining the sensor position (X_o, Y_o, Z_o) with the vector defined by the range (ρ), which is rotated for pointing from the orientation of the INS (R_INS) and the laser system pointing (R_s). In the process, coordinate transformations must be accounted for, consisting of R_w (rotation from the local ellipsoidal system into WGS84) and R_G (rotation from the local vertical frame to the ellipsoidal reference frame). Offsets between the GPS phase center and the laser firing point (δx, δy, δz) and IMU misalignment angles (R_m) are included. Although there are multiple variations on the equation above depending on the technique
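As a minimal numerical sketch of the direct georeferencing equation above (the function name, argument layout, and the identity-rotation example are ours, not from the paper; all R_* arguments are assumed to be 3 × 3 rotation matrices):

```python
import numpy as np

def georeference(sensor_pos, R_w, R_G, R_INS, lever_arm, R_m, R_s, rho):
    """Equation 3: ground point from sensor position, rotations, lever arm, and range.

    sensor_pos : (X_o, Y_o, Z_o), the GPS phase-center position
    lever_arm  : (dx, dy, dz), offset from GPS phase center to laser firing point
    rho        : measured laser range
    """
    range_vec = np.array([0.0, 0.0, rho])  # range along the beam axis
    return np.asarray(sensor_pos) + R_w @ R_G @ R_INS @ (
        np.asarray(lever_arm) + R_m @ R_s @ range_vec)

# With all rotations set to identity and no lever arm, the ground point is
# simply the sensor position displaced by the range along the beam axis.
I = np.eye(3)
pt = georeference((100.0, 200.0, 500.0), I, I, I, (0.0, 0.0, 0.0), I, I, 50.0)
print(pt)  # -> [100. 200. 550.]
```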