
incorporated the LMM and particle swarm optimization (PSO) to develop LMM-constrained PSO (LMMC-PSO) for endmember extraction from highly mixed data. Each particle in LMMC-PSO moves in the search space according to the LMM, rather than with a velocity. In this paper, we optimize a dictionary learning (DL) method and incorporate it with endmember extraction to refine the endmember accuracy. DL has mathematical models and physical meanings similar to those of the LMM (Liu et al. 2018): each mixed pixel is a linear combination of a few endmembers in certain proportions. Therefore, the endmember extraction problem can be transformed into a DL problem (Hong et al. 2017). K-Singular Value Decomposition (K-SVD) (Romberg 2008) is a classic DL method for achieving sparse signal representations. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and updating the dictionary atoms to better fit the data. This paper draws on the role of K-SVD and proposes an optimization method for endmember extraction, which contains three steps: selecting the observed data; eliminating the background noise; and performing column-by-column iteration on the endmembers based on K-SVD. The proposed method can further improve the accuracy of endmembers obtained by other endmember extraction methods.
Methodology
The proposed method transforms the endmember extraction problem into a DL problem. K-SVD optimizes the endmember extraction results by reducing the error between the estimated image and the original image. The section “LMM” presents the LMM and the optimization problem. The section “K-SVD” briefly reviews the K-SVD algorithm. Finally, the section “Optimization Endmember Extraction Method” introduces the proposed optimization method in detail.
LMM
The LMM ignores the multiple scattering between features and regards a mixed pixel as a linear combination of endmembers weighted by their corresponding abundances. Mathematically, the model is given by:

$x = As + w$ (1)

where $x \in \mathbb{R}^L$ denotes the observation vector with $L$ spectral bands, $A = [a_1, a_2, \ldots, a_P] \in \mathbb{R}^{L \times P}$ is the endmember matrix consisting of the spectra of all signatures contained in the image, $s \in \mathbb{R}^P$ is the corresponding abundance vector, which gives the contribution of each endmember to this mixed pixel, and $w \in \mathbb{R}^L$ is the residual vector. $P$ is the number of endmembers. Each column vector of $A$ corresponds to the spectral vector of an endmember. Correspondingly, when the HSI has $N$ pixels:
$X = AS + W$ (2)

where $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{L \times N}$, $S = [s_1, s_2, \ldots, s_N] \in \mathbb{R}^{P \times N}$, and $W = [w_1, w_2, \ldots, w_N] \in \mathbb{R}^{L \times N}$. Obviously, in order for the results to be physically meaningful, the abundance vector in Equation 1 should satisfy two constraints: 1) the Abundance Nonnegativity Constraint (ANC), $s_i \ge 0$ ($i = 1, \ldots, P$), and 2) the Abundance Sum-to-One Constraint (ASC), $\sum_{i=1}^{P} s_i = 1$.
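As a concrete illustration, the mixing model of Equations 1 and 2 with the ANC and ASC can be simulated as follows (a minimal sketch; the band count, endmember count, noise level, and Dirichlet abundance sampling are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
L, P, N = 50, 4, 200                     # bands, endmembers, pixels (illustrative sizes)

A = rng.uniform(0.0, 1.0, (L, P))        # endmember matrix, one spectrum per column
S = rng.dirichlet(np.ones(P), size=N).T  # abundances: each column is >= 0 and sums to 1
W = 0.01 * rng.standard_normal((L, N))   # small residual term

X = A @ S + W                            # Equation 2: linear mixing of N pixels

# The Dirichlet draw enforces both constraints by construction:
assert np.all(S >= 0)                    # ANC
assert np.allclose(S.sum(axis=0), 1.0)   # ASC
```

Sampling abundances from a Dirichlet distribution is a common way to satisfy the ANC and ASC simultaneously when generating synthetic mixed pixels.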
The ground object distribution is complex and heterogeneous, and the number of distinct materials in a scene will be much larger than in a single pixel (Keshava and Mustard 2002). Therefore, the endmembers will be sparsely distributed in the original data, and the corresponding abundance matrix will also be sparse (Zhong et al. 2016). To exploit this sparsity, note that the $\ell_0$ norm of a signal counts its number of nonzero elements, and $\delta \ge 0$ is the error tolerance due to noise and modeling errors. Therefore, Equation 2 should be solved by the following optimization problem:

$\min_{A,S} \|S\|_0 \quad \text{subject to} \quad \|X - AS\|_F^2 \le \delta, \ S \ge 0$ (3)
Equation 3 is an optimization problem for the sparse solution and belongs to a class of combinatorial search problems, which is NP-hard. Consequently, we use a convex approximation by relaxing the $\ell_0$ norm into the $\ell_1$ norm, and the optimization problem thus becomes (Donoho 2006):

$\min_{A,S} \|S\|_1 \quad \text{subject to} \quad \|X - AS\|_F^2 \le \delta, \ S \ge 0$ (4)
Many DL methods can solve Equation 4; in this paper, we choose K-SVD to obtain the endmember matrix.
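A constrained problem like Equation 4 is often solved in its penalized (Lagrangian) form. The sketch below shows one standard approach for the coding step with a fixed dictionary $A$: proximal gradient descent (ISTA) with soft-thresholding and a nonnegativity projection. This is an illustration of the general technique, not the paper's method; the penalty weight, step size, and iteration count are assumptions:

```python
import numpy as np

def sparse_code_l1(X, A, lam=0.1, n_iter=500):
    """Approximately solve min_S 0.5*||X - A S||_F^2 + lam*||S||_1 with S >= 0,
    a penalized form of Equation 4, by projected ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    S = np.zeros((A.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = A.T @ (A @ S - X)             # gradient of the quadratic data term
        S = S - step * grad
        S = np.maximum(S - step * lam, 0.0)  # soft-threshold and project to S >= 0
    return S

# Toy check: recover a sparse nonnegative code from a noiseless mixture.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
S_true = np.zeros((10, 5))
S_true[:3, :] = rng.uniform(0.5, 1.0, (3, 5))    # only 3 of 10 atoms active
X = A @ S_true
S_hat = sparse_code_l1(X, A, lam=0.05)
```

Because the objective at $S = 0$ equals $\tfrac{1}{2}\|X\|_F^2$, every iterate can only lower the reconstruction error relative to that starting point.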
K-SVD
K-SVD (Aharon, Elad, and Bruckstein 2006) is a popular DL algorithm based on the SVD. The performance of the dictionary depends on how well the image structure matches the dictionary (Ramirez and Sapiro 2012). The SVD of the error matrix is used to update a dictionary atom and its sparse representation coefficients from the largest singular value and its corresponding singular vectors. Because the initial value of the dictionary is not optimal, there is an error between the reconstructed data and the original data. K-SVD is used to solve the sparse representation problem in Equation 5:

$\min_{A,S} \|X - AS\|_F^2 \quad \text{subject to} \quad \forall i, \ \|s_i\|_0 \le T_0$ (5)

where $X$ denotes the signals, $A$ denotes the dictionary matrix, and $S$ denotes the representation coefficients of the signals $X$. The K-SVD algorithm optimizes the dictionary matrix and the coefficient matrix column by column to reduce the error while satisfying the signal sparsity; during the iteration, coefficients are updated synchronously with the dictionary to ensure the correspondence between both. After the iterations, a dictionary that satisfies the error requirement is finally obtained. The specific process of the K-SVD algorithm is detailed in Aharon, Elad, and Bruckstein (2006).
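The alternation described above can be sketched as follows. This is a simplified single-iteration version of K-SVD with greedy orthogonal-matching-pursuit sparse coding; the toy sizes and the value of $T_0$ are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def omp(x, A, T0):
    """Greedy sparse coding: select at most T0 atoms for one signal x (Equation 5)."""
    residual, support = x.copy(), []
    s = np.zeros(A.shape[1])
    for _ in range(T0):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)  # refit on the support
        residual = x - A[:, support] @ coef
    s[support] = coef
    return s

def ksvd_step(X, A, T0):
    """One K-SVD iteration: sparse coding, then a rank-1 SVD update of each atom.
    Note: A is modified in place and returned together with the codes S."""
    S = np.column_stack([omp(X[:, j], A, T0) for j in range(X.shape[1])])
    for k in range(A.shape[1]):
        users = np.nonzero(S[k, :])[0]           # signals that actually use atom k
        if users.size == 0:
            continue
        # Error matrix with atom k's contribution removed, restricted to its users.
        E = X[:, users] - A @ S[:, users] + np.outer(A[:, k], S[k, users])
        U, sigma, Vt = np.linalg.svd(E, full_matrices=False)
        A[:, k] = U[:, 0]                        # new atom: leading left singular vector
        S[k, users] = sigma[0] * Vt[0, :]        # coefficients updated in sync with the atom
    return A, S

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 40))                # 40 signals with 20 "bands" each
A = rng.standard_normal((20, 8))
A /= np.linalg.norm(A, axis=0)                   # atoms normalized to unit norm
A1, S1 = ksvd_step(X, A.copy(), T0=3)
```

The atom update only changes coefficients at positions that were already nonzero, so each column of `S1` keeps at most `T0` active entries, exactly as the constraint in Equation 5 requires.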
Optimization Endmember Extraction Method
Samples Selection
A mixed pixel is usually a mixture of only a few endmembers. For a mixed pixel, the higher the abundance of a certain endmember, the higher its credibility or contribution; similarly, if the abundance of a certain endmember is low, its credibility or contribution in the mixed pixel is relatively low. Therefore, the observed data set should be selected by an appropriate threshold on the abundance values, so that only the samples with higher confidence are retained, which can improve accuracy.
First, every value in the abundance matrix smaller than the threshold $\delta$ is set to zero, that is:
$s_{i,j} = 0 \quad \text{if} \quad s_{i,j} < \delta \quad (i = 1, 2, \ldots, P;\ j = 1, 2, \ldots, N)$ (6)
where $s_{i,j}$ represents the value of the abundance matrix at coordinate $(i, j)$. Then, sum all columns of the new $S$ matrix (here, the new $S$ matrix denotes the abundance matrix $S$ with entries set to zero according to the $(i, j)$ obtained by Equation 6) to get a row vector $s_r \in \mathbb{R}^N$. According to the positions of zeros in $s_r$, the corresponding columns in the observed data $X$ would not be required in the next
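The selection step described by Equation 6 and the column summation can be sketched as follows (a minimal illustration; the threshold value and array sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
P, N = 4, 10
S = rng.uniform(0.0, 1.0, (P, N))     # abundance matrix (illustrative values)
X = rng.standard_normal((30, N))      # observed data, one pixel per column
delta = 0.3                           # abundance threshold (assumed value)

S_new = np.where(S < delta, 0.0, S)   # Equation 6: zero out low-confidence abundances
s_r = S_new.sum(axis=0)               # row vector of column sums of the new S
keep = s_r != 0                       # positions of nonzeros in s_r
X_sel = X[:, keep]                    # drop the columns of X where s_r is zero
```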
880 December 2019 PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING