
coefficient measure are summarized in Table 4. The classification maps of the labeled pixels are shown in Figure 14. It can be seen that the proposed CSCSR scheme achieves the best performance for each class, with the parameters set at λ₁ = 1 and λ₂ = 0.1. The kernel collaborative representation method with Tikhonov regularization and composite kernel achieves the performance closest to our method.
The results for different λ₁ and λ₂ values are shown in Figure 15. When the percentage of training data is varied over {5%, 10%, …, 30%}, the overall accuracy increases from 99.27% to 99.96%. Although increasing the number of training samples leads to higher classification accuracy, it also increases the computational burden. For comparison with the other methods, 10% of the pixels were used as training samples. When the patch size is varied over {1, 3, 5, 7, 9}, the corresponding overall accuracies are {96.06%, 99.86%, 99.89%, 98.75%, 98.22%}, respectively; patch sizes of 3 × 3 and 5 × 5 give the best classification accuracy.
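The patch-size experiment above can be reproduced with a simple spatial-window extractor. This is a minimal sketch, not the paper's code: the function name, the reflect padding used at image borders, and the toy cube dimensions are all illustrative assumptions.

```python
import numpy as np

def extract_patch(cube, row, col, size):
    """Extract a size x size spatial patch (all bands) centered on one pixel.

    The cube is reflect-padded so border pixels also get full patches;
    the padding choice is an illustrative assumption, not from the paper.
    """
    half = size // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size, :]

# Toy cube standing in for a hyperspectral image: 10 x 12 pixels, 5 bands.
cube = np.arange(10 * 12 * 5, dtype=float).reshape(10, 12, 5)
patch = extract_patch(cube, 0, 0, 3)   # 3x3 window even at the image corner
```

Each pixel's patch then supplies the spatial context whose size is swept over {1, 3, 5, 7, 9} in the experiment.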
In addition, Figure 16 demonstrates how the smoothness-regularization term affects the classification results. As the regularization parameter λ₂ increases, the sparse coefficient vector becomes denser: more training samples in the dictionary are assigned nonzero coefficients, which leads to better representation results for an appropriate λ₂. However, a λ₂ that is too large results in overly smooth SC vectors, and thus higher classification errors.
Experimental Results: Salinas
The third HSI is the Salinas image acquired by the AVIRIS sensor over Salinas Valley, California, with a high spatial resolution of 3.7 m/pixel. The size of this image is 512 × 217 pixels, and it has 204 spectral bands (after 20 water-absorption bands were removed). A listing of 16 classes and the numbers of the
Table 3. Listing of nine classes in the University of Pavia hyperspectral-image data. Training and testing sample counts are shown for each class.

No.  Name                    Train    Test
1    Asphalt                   664    5967
2    Meadows                  1865   16784
3    Gravel                    210    1889
4    Trees                     307    2575
5    Painted metal sheets      135    1210
6    Bare Soil                 503    4526
7    Bitumen                   133    1197
8    Self-Blocking Bricks      369    3313
9    Shadows                    95     852
     Total                    4281   38495
Table 4. Classification accuracy (%) for the testing set of the University of Pavia hyperspectral-image data. An asterisk (*) marks the best-performing classifier(s) for each class.
Notes: SVM = support vector machine; OMP = orthogonal matching pursuit; CRC = collaborative representation; SOMP = simultaneous orthogonal matching pursuit; NRS = nearest regularized subspace; JCR = joint collaborative representation; KCRT-CK = kernel collaborative representation with Tikhonov regularization and composite kernel; SC = sparse coding; CSC = collaborative sparse coding; CSCSR = collaborative sparse coding with smoothness regularization; OA = overall accuracy; AA = average accuracy.

Class  SVM    OMP    CRC    SOMP   NRS    JCR    KCRT-CK  SC     CSC    CSCSR
1      98.27  98.43  91.60  98.07  99.58  99.51  98.96    96.35  99.19  99.73*
2      99.91  99.54  100*   99.51  99.97  99.98  99.97    98.77  99.75  100*
3      92.17  93.38  80.10  95.24  96.29  96.45  98.62    84.55  88.37  99.74*
4      97.97  97.06  96.08  97.64  94.74  96.77  98.95    97.15  98.49  99.71*
5      99.42  99.75  100*   100*   99.92  100*   100*     99.69  100*   100*
6      97.95  96.42  97.30  95.67  94.59  95.16  98.80    91.44  96.71  100*
7      96.32  95.32  99.58  96.49  96.66  96.32  99.75    90.58  95.25  100*
8      95.17  86.21  97.25  85.84  97.49  98.49  99.03    93.88  97.43  99.28*
9      97.18  98.12  32.98  97.07  99.65  99.53  97.54    98.33  99.22  100*
OA     98.31  97.21  95.39  97.18  98.39  98.69  99.51    96.06  98.31  99.86*
AA     97.15  96.03  95.92  96.17  97.65  98.03  99.18    94.53  97.16  99.83*
κ      97.76  96.31  93.92  96.26  97.88  98.26  99.36    94.77  97.76  99.82*
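The OA, AA, and κ figures reported in Table 4 all derive from the confusion matrix of the testing set. The following is a minimal sketch of these standard metrics (the function name and the toy three-class matrix are mine, not from the paper):

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy, average accuracy, and Cohen's kappa from a square
    confusion matrix (rows: true class, columns: predicted class)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    diag = np.diag(confusion)
    oa = diag.sum() / total                      # fraction of correct pixels
    aa = np.mean(diag / confusion.sum(axis=1))   # mean per-class recall
    # Expected agreement by chance, from the row/column marginals.
    pe = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total**2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Toy 3-class example.
cm = [[90,  5,  5],
      [10, 80, 10],
      [ 0,  0, 100]]
oa, aa, kappa = accuracy_metrics(cm)   # oa = 0.9, aa = 0.9, kappa = 0.85
```

κ discounts the agreement expected by chance, which is why it is consistently slightly lower than OA in Table 4.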
Figure 14. The University of Pavia hyperspectral-image classification results: (a) training set and (b) testing set. Classification
maps obtained by (c) joint collaborative representation, (d) kernel collaborative representation with Tikhonov regularization
and composite kernel, (e) support vector machine, (f) simultaneous orthogonal matching pursuit, (g) orthogonal matching
pursuit, (h) nearest regularized subspace, (i) sparse coding, (j) collaborative representation, (k) collaborative sparse coding,
and (l) collaborative sparse coding with smoothness regularization.
668 · September 2019 · PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING