Therefore, the interior orientation parameters of satellite B
can be calibrated according to Equation 3.
Model and Procedures for Geometric Cross-Calibration
As shown in Figure 1, if there is no error in the exterior and interior orientation parameters of satellites A and B, $p_0$ and $p_1$ should be positioned at the same location (Yonghua et al., 2014):

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}_A
+ m_A R_{J2000}^{WGS84} R_{body}^{J2000} R_{camera}^{body}
\begin{bmatrix} \tan\psi_x \\ \tan\psi_y \\ 1 \end{bmatrix}_A
= \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}_B
+ m_B R_{J2000}^{WGS84} R_{body}^{J2000} R_{camera}^{body}
\begin{bmatrix} \tan\psi_x \\ \tan\psi_y \\ 1 \end{bmatrix}_B
\quad (4)$$
where $(X\ Y\ Z)^T$ is the position vector of $S$ in the WGS84 coordinate system, $(X_S\ Y_S\ Z_S)^T_A$ is the position vector of satellite A in the WGS84 coordinate system, $R_C^D$ denotes the rotation matrix for converting the coordinate system from C to D (e.g., from the camera coordinate system to the satellite body coordinate system), $m_A$ and $m_B$ are scaling factors, and $(\psi_x, \psi_y)$ is a comprehensive representation of the interior orientation parameters, defined as the look angles of pixels $p_0$ and $p_1$, respectively.
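The forward model in Equation 4 is straightforward to evaluate numerically. The following sketch (the function name and all inputs are illustrative placeholders, not values from this paper) projects one pixel to the ground, assuming the three rotation matrices are supplied externally:

```python
import numpy as np

def ground_point(sat_pos, R_j2000_wgs84, R_body_j2000, R_camera_body,
                 psi_x, psi_y, m):
    """Project a pixel to the ground with the rigorous model of Equation 4.

    sat_pos       : satellite position (X_S, Y_S, Z_S) in WGS84, metres
    R_*           : 3x3 rotation matrices between the named frames
    psi_x, psi_y  : look angles of the pixel, in radians
    m             : scaling factor along the viewing ray
    """
    # Viewing direction in the camera frame: (tan psi_x, tan psi_y, 1)^T
    look = np.array([np.tan(psi_x), np.tan(psi_y), 1.0])
    # Chain the rotations camera -> body -> J2000 -> WGS84
    R = R_j2000_wgs84 @ R_body_j2000 @ R_camera_body
    return np.asarray(sat_pos, dtype=float) + m * (R @ look)
```

Evaluating this function for the same ground point with the parameters of satellites A and B, and checking that the two results coincide, is exactly the consistency condition that Equation 4 expresses.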
However, Equation 4 is usually invalid because of the presence of exterior and interior errors. According to the derivation by Yonghua et al. (2015), the positioning errors caused by orbit position errors can be modeled as translation errors, and those caused by attitude errors can be expressed as a combination of translation and rotation errors. Therefore, an offset matrix can be used to eliminate the deviation in the intersection caused by exterior errors, because it compensates for both the translation and rotation errors (Radhadevi et al., 2011). Moreover, Equation 5 can be adopted as a model for interior calibration (Yonghua et al., 2014; Guo et al., 2014). Therefore, the geometric cross-calibration model can be written as Equation 6, under the condition that the interior orientation parameters of satellite A have been calibrated accurately:
$$\begin{cases}
\tan\psi_x = a_0 + a_1 s + a_2 s^2 + \cdots + a_i s^i \\
\tan\psi_y = b_0 + b_1 s + b_2 s^2 + \cdots + b_j s^j
\end{cases} \quad (5)$$

where $s$ denotes the pixel number along the CCD array.
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}_A
+ m_A R_{J2000}^{WGS84} R_{body}^{J2000} R_{camera}^{body}
\begin{bmatrix} \tan\psi_x \\ \tan\psi_y \\ 1 \end{bmatrix}_A
= \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}_B
+ m_B R_{J2000}^{WGS84} R_{body}^{J2000} R_u R_{camera}^{body}
\begin{bmatrix} a_0 + a_1 s + a_2 s^2 + \cdots + a_i s^i \\ b_0 + b_1 s + b_2 s^2 + \cdots + b_j s^j \\ 1 \end{bmatrix}_B
\quad (6)$$
where $R_u$ consists of $\varphi_u$, $\omega_u$, and $\kappa_u$, as follows:

$$R_u =
\begin{bmatrix} \cos\varphi_u & 0 & -\sin\varphi_u \\ 0 & 1 & 0 \\ \sin\varphi_u & 0 & \cos\varphi_u \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega_u & -\sin\omega_u \\ 0 & \sin\omega_u & \cos\omega_u \end{bmatrix}
\begin{bmatrix} \cos\kappa_u & \sin\kappa_u & 0 \\ -\sin\kappa_u & \cos\kappa_u & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
In Equation 6, $\varphi_u$, $\omega_u$, $\kappa_u$, and $a_i$, $b_j$ ($i, j \le 5$) are the parameters to be solved.
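As a concrete illustration, the offset matrix $R_u$ can be assembled from the three angles as the product of the elementary rotations; this sketch assumes the sign convention shown in the decomposition above (other conventions place the minus signs differently):

```python
import numpy as np

def offset_matrix(phi_u, omega_u, kappa_u):
    """Build the offset matrix R_u = R(phi) R(omega) R(kappa).

    Angles are in radians; the three factors follow the elementary
    rotation convention of the decomposition given in the text.
    """
    c, s = np.cos(phi_u), np.sin(phi_u)
    R_phi = np.array([[c, 0.0, -s],
                      [0.0, 1.0, 0.0],
                      [s, 0.0, c]])
    c, s = np.cos(omega_u), np.sin(omega_u)
    R_omega = np.array([[1.0, 0.0, 0.0],
                        [0.0, c, -s],
                        [0.0, s, c]])
    c, s = np.cos(kappa_u), np.sin(kappa_u)
    R_kappa = np.array([[c, s, 0.0],
                        [-s, c, 0.0],
                        [0.0, 0.0, 1.0]])
    return R_phi @ R_omega @ R_kappa
```

Because each factor is a proper rotation, $R_u$ is orthonormal with determinant 1, and zero angles yield the identity (i.e., no exterior compensation).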
To improve the interior calibration accuracy of satellite B, a geometric restriction, namely that conjugate points in the adjacent CCD arrays of satellite B should be positioned at the same ground location, can be introduced into the geometric cross-calibration model.
The parameters of the calibration model, namely $R_u$ (which represents the three angles $\varphi_u$, $\omega_u$, and $\kappa_u$) and $a_i$, $b_j$ in Equation 6, can be solved using an alternate iteration strategy. After solving $R_u$, Equation 6 can be transformed into:
$$\begin{bmatrix} X - X_S \\ Y - Y_S \\ Z - Z_S \end{bmatrix}_B
= m_B R_{camera}^{WGS84}
\begin{bmatrix} \tan\psi_x \\ \tan\psi_y \\ 1 \end{bmatrix}_B. \quad (7)$$
Let

$$R_{camera}^{WGS84} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix},$$

then the interior calibration model can be written as:
$$\begin{cases}
\dfrac{a_1\tan\psi_x + a_2\tan\psi_y + a_3}{c_1\tan\psi_x + c_2\tan\psi_y + c_3} - \dfrac{X - X_S}{Z - Z_S} = 0 \\[2ex]
\dfrac{b_1\tan\psi_x + b_2\tan\psi_y + b_3}{c_1\tan\psi_x + c_2\tan\psi_y + c_3} - \dfrac{Y - Y_S}{Z - Z_S} = 0
\end{cases} \quad (8)$$
where $\tan\psi_x$ and $\tan\psi_y$ are modeled as in Equation 5. The parameters $a_i$, $b_j$ ($i, j \le 5$) and the ground-object coordinates $(X\ Y\ Z)^T$ of all conjugate points in Equation 8 are the unknowns that must be solved. The error equation obtained by linearizing Equation 8 is as follows:
$$v = \begin{bmatrix} A & B \end{bmatrix}
\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} - L, \quad p_{n \times n}, \quad (9)$$
where $n$ is the number of conjugate points; $X_1$ is $(da_0\ \cdots\ da_i\ db_0\ \cdots\ db_j)^T$; $X_2$ is the correction $(dX_1\ dY_1\ dZ_1\ \cdots\ dX_n\ dY_n\ dZ_n)^T$ of the ground coordinates of the conjugate points; $A$ and $B$ denote the coefficient matrices obtained by taking the partial derivatives of Equation 8 with respect to $X_1$ and $X_2$; and $p_{n \times n}$ denotes the weights of the observations. As described earlier, the conjugate points from multiple satellites and those from the adjacent CCD arrays of one satellite are both used to solve Equation 9. Because the geometric and radiometric differences between images acquired by adjacent CCD arrays in the same sensor are slight, conjugate points from adjacent CCD arrays can be extracted by image matching with higher accuracy than those from multitemporal or multisatellite images. Therefore, different weights can be assigned to the two types of observations according to their matching accuracies.
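One step of the weighted least-squares solution of Equation 9 can be sketched as follows. The function name is illustrative; the full normal equations are formed directly for clarity, whereas in practice the alternate iteration strategy and any elimination of the ground unknowns (discussed below) would be applied:

```python
import numpy as np

def solve_calibration_step(A, B, L, w):
    """One weighted least-squares step for Equation 9, v = [A B] x - L.

    A : partial derivatives of Equation 8 w.r.t. the interior unknowns X1
    B : partial derivatives w.r.t. the ground-coordinate corrections X2
    L : observation vector
    w : per-observation weights (the diagonal of p)
    Returns the corrections (X1, X2).
    """
    M = np.hstack([A, B])          # combined design matrix [A B]
    P = np.diag(w)                 # weight matrix p
    N = M.T @ P @ M                # normal matrix
    t = M.T @ P @ L                # right-hand side
    x = np.linalg.solve(N, t)
    return x[:A.shape[1]], x[A.shape[1]:]
```

Consistent with the weighting described above, observations from conjugate points in adjacent CCD arrays would receive larger weights in `w` than those from cross-satellite matches.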
However, if we solve $X_1$ and $X_2$ directly using Equation 9, two main problems may occur. First, the accuracy of $(X, Y, Z)$ obtained through forward intersection is severely affected by the accuracy of the conjugate points when the convergence angles between the adjacent CCD arrays are very small, thus reducing the calibration accuracy. Second, each pair of conjugate points introduced into the equation adds three unknowns, ultimately leading to an excessive number of unknowns and a heavy computational load.
Suppose that the interior orientation parameters of satellite
A have been calibrated. To simplify the solution, the detailed
procedure of geometric cross-calibration is as follows:
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING
August 2018
487