PE&RS October 2018 Full - page 608

f = I_m x − e_x,   e_x ~ N(0, σ_x² Q_x),   (4)
where f = 0 denotes the m×1 fictitious observation vector, e_x denotes the unknown m×1 random error vector, σ_x² denotes the unknown variance component, Q_x denotes the m×m symmetric positive-definite cofactor matrix, and P_x = Q_x^{-1} denotes the m×m diagonal weight matrix. In Equation 4, the camera-external parameters are viewed as fictitious observations.
When the total least squares approach is characterized by minimizing the total sum of squared residuals (TSSR), the combination of Equations 3 and 4 can be resolved by making the equivalent Lagrange target function stationary:

Ø(e, E_A, e_x, x, λ, ϑ) = e^T P e + e_A^T (I_m ⊗ P) e_A + e_x^T P_x e_x
    + 2λ^T (y − (A + E_A) x + e) + 2ϑ^T (f − I_m x + e_x),   (5)
where e_A = vec(E_A), and λ and ϑ denote the n×1 and m×1 vectors of unknown Lagrange multipliers, respectively. The first-order condition equations then read as
∂Ø/∂ẽ^T = ∂Ø/∂ẽ_A^T = ∂Ø/∂ẽ_x^T = ∂Ø/∂x̂^T = ∂Ø/∂λ̂^T = ∂Ø/∂ϑ̂^T = 0,   (6)
where hats and tildes are placed over the unknown non-ran-
dom and random vectors or matrices, respectively, to indicate
the particular quantities that satisfy the first-order condition
equations. The sufficient condition for a multivariate mini-
mum can be fulfilled because
(1/2) ∂²Ø / (∂[e^T e_A^T e_x^T]^T ∂[e^T e_A^T e_x^T]) =
    | P       0        0   |
    | 0    I_m ⊗ P     0   |
    | 0       0       P_x  |
is positive definite.   (7)
Setting the partial derivatives ∂Ø/∂ẽ^T, ∂Ø/∂ẽ_A^T, and ∂Ø/∂ẽ_x^T to zero yields the random errors:
ẽ = −P^{-1} λ̂,   (8a)

ẽ_A = (x̂ ⊗ P^{-1}) λ̂,   (8b)

ẽ_x = −P_x^{-1} ϑ̂.   (8c)
From Equation 8b, Ẽ_A can be represented as:

Ẽ_A = −ẽ x̂^T.   (9)
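Equations 8b and 9 express the same quantity in vectorized and matrix form, via the identity vec(P^{-1} λ x^T) = (x ⊗ P^{-1}) λ. A minimal NumPy check, where P, λ, and x are random stand-ins rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3                         # n observations, m parameters

B = rng.standard_normal((n, n))
P = B @ B.T + n * np.eye(n)         # symmetric positive-definite weight matrix
lam = rng.standard_normal(n)        # stand-in for the multiplier vector λ-hat
x = rng.standard_normal(m)          # stand-in for the parameter vector x-hat
P_inv = np.linalg.inv(P)

# Equation 8b: vec(E_A) = (x ⊗ P^{-1}) λ   (column-stacking vec convention)
e_A_vec = np.kron(x.reshape(-1, 1), P_inv) @ lam

# Equations 8a and 9: e = -P^{-1} λ, then E_A = -e x^T
e = -P_inv @ lam
E_A = -np.outer(e, x)

# The column-stacked matrix form matches the Kronecker form
assert np.allclose(E_A.flatten(order="F"), e_A_vec)
print("Equations 8b and 9 agree")
```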
The Lagrange multiplier vectors are then obtained from Equations 8a and 9, together with the conditions ∂Ø/∂λ̂^T and ∂Ø/∂x̂^T, in the following equations:

ϑ̂ = −A^T λ̂ − x̂ λ̂^T P^{-1} λ̂,   (10a)

λ̂ = (1 + x̂^T x̂)^{-1} P (y − A x̂),   (10b)
such that

λ̂ = −P ẽ.   (11)
From Equation 8c and the condition ∂Ø/∂ϑ̂^T, ϑ̂ is obtained as

ϑ̂ = P_x (f − I_m x̂).   (12)
In this case, the camera-external parameter vector is easily obtained with Equations 10a, 11, and 12:

x̂ = [P_x − (λ̂^T P^{-1} λ̂) I_m]^{-1} [(1 + x̂^T x̂)^{-1} A^T P (y − A x̂) + P_x f].   (13)
Because the WTLS solution from Equation 13 contains the unknown camera-external parameters x̂, an iterative procedure must be adopted. In addition, it is worth noting that the weight of the y-parallax of corresponding points differs with the image resolution. Thus, we set the weight matrix P as k(σ₀²/σ_p²)I_n, k > 0, where σ_p² represents the variance component of the observations of the corresponding points, and P_x is set as the identity matrix. If the random errors E_A in the coefficient matrix are not considered, Equation 13 reduces to the ordinary least squares solution.
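The iteration implied by Equation 13 can be sketched as follows. This is an illustrative implementation on synthetic data under the settings described in the text (P and P_x as identity-type weights, f = 0), not the authors' code; for numerical stability the A^T P A x̂ term of Equation 13 is moved to the left-hand side, which leaves the fixed point unchanged.

```python
import numpy as np

def wtls_step(A, y, P, P_x, f, x):
    """One update of Equation 13. The A^T P A x-hat term is kept on the
    left-hand side (algebraically equivalent) so the iteration is stable."""
    m = A.shape[1]
    s = 1.0 + x @ x                          # 1 + x^T x
    lam = P @ (y - A @ x) / s                # Lagrange multipliers, Equation 10b
    c = lam @ np.linalg.solve(P, lam)        # scalar λ^T P^{-1} λ
    lhs = A.T @ P @ A / s + P_x - c * np.eye(m)
    rhs = A.T @ P @ y / s + P_x @ f
    return np.linalg.solve(lhs, rhs)

def wtls(A, y, P, P_x, f, x0, tol=1e-10, max_iter=100):
    """Iterate Equation 13 until the parameter update drops below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = wtls_step(A, y, P, P_x, f, x)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Synthetic errors-in-variables problem (illustrative values only)
rng = np.random.default_rng(1)
n, m = 50, 3
x_true = np.array([0.5, -0.2, 0.1])
A_clean = rng.standard_normal((n, m))
A = A_clean + 0.01 * rng.standard_normal((n, m))   # coefficient matrix with errors E_A
y = A_clean @ x_true + 0.01 * rng.standard_normal(n)

P = np.eye(n)      # k (σ0² / σp²) I_n, with the constant taken as 1 here
P_x = np.eye(m)    # identity weight for the fictitious observations
f = np.zeros(m)    # fictitious observation vector f = 0
x_hat = wtls(A, y, P, P_x, f, x0=np.zeros(m))
print(x_hat)       # close to x_true, shrunk slightly toward f = 0
```

With f = 0 the fictitious observations act as a mild prior pulling the incremental parameters toward zero, which is why the recovered estimate is slightly shrunk relative to the truth.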
When the rover's stereo vision system works on a planetary surface, we extract the corresponding points with the SURF algorithm (Bay et al., 2006). Equations 3 and 4 can then be constructed according to the y-parallax of these corresponding points, and the initial values of the camera-external parameters, denoted x_0, can be obtained before launch. We set the variance σ_p² as 1/3, and the initial values of the weight matrices P and P_x are therefore set to identity matrices. Finally, the incremental parameters can be obtained iteratively by using Equation 13 until convergence to a certain threshold.
During the iterative process, the threshold is set as 0.003, and the variance component σ̂_p², the weight matrix P, and the incremental parameters x̂ change. The estimated value of the unknown variance component σ̂₀² can be represented as:
σ̂₀² = (y − A x̂)^T P (y − A x̂) / [(1 + x̂^T x̂)(n − m)].   (14)

The errors of the camera-external parameters can then be approximated by σ̂₀² (A^T P A)^{-1}.
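A short sketch of the variance-component estimate and the covariance approximation; the coefficient matrix and the stand-in estimate x_hat below are synthetic illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 3
A = rng.standard_normal((n, m))            # illustrative coefficient matrix
x_hat = np.array([0.3, -0.1, 0.2])         # stand-in for the converged estimate
y = A @ x_hat + 0.05 * rng.standard_normal(n)
P = np.eye(n)                              # weight matrix of the observations

# Variance component: quadratic form of the residuals over the redundancy
# n - m, scaled by 1 + x^T x as in the WTLS derivation above
r = y - A @ x_hat
sigma0_sq = (r @ P @ r) / ((1.0 + x_hat @ x_hat) * (n - m))

# Approximate dispersion of the camera-external parameters
cov = sigma0_sq * np.linalg.inv(A.T @ P @ A)
print(np.sqrt(np.diag(cov)))               # approximate standard errors
```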
Planetary Rover's Visual Localization in Multiple Stations
Planetary Rover Pose Estimation Problem Definition
The task of the autonomous planetary rover is to reconstruct
the 3D structure of the scene using stereo vision and simul-
taneously to determine its location on the planetary surface.
First, when the camera-intrinsic and -external parameters are
known, the 3D coordinates of the selected feature points in
the camera coordinate system
F
L
can be obtained by using the
forward intersection (Wang, 1990) or triangulation technique.
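For the forward intersection step, a minimal linear (DLT-style) triangulation sketch; the projection matrices, intrinsics, and point below are hypothetical illustrations, not the specific method of Wang (1990):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) forward intersection of one point from two views.
    P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) image coordinates."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations in the homogeneous point X
    D = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(D)
    X = Vt[-1]                     # null vector of D
    return X[:3] / X[3]

# Hypothetical rectified stereo pair: intrinsics K, baseline b along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
b = 0.3
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])

X_true = np.array([0.4, -0.1, 5.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
X_rec = triangulate(P1, P2, h1[:2] / h1[2], h2[:2] / h2[2])
print(X_rec)                       # recovers X_true for exact observations
```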
The coordinates of the features will be incorporated into the
planetary rover’s body coordinate system
F
r
within the coor-
dinate system transformation framework. The tie points in
the varying stations can then be acquired by the Affine-SIFT
algorithm (Morel and Yu, 2009), where the images undergo
smooth deformations. The role of the tie points is to associate
the images in the current station with the images in the previ-
ous station. Therefore, a new rover pose estimation model
will be set up by those tie points. Finally, the translation and
rotation parameters between the body coordinate systems can
be obtained by using 3D similarity transformation based on