
patches (512 × 512 in our experiment), which lead to a lack of context information. Note that ResNet+FCN and LMB-CNN+FCN can be trained directly with a low-resolution feature map extracted from the whole remote sensing image. Second, the end-to-end learning strategy of Deeplab v3+ carries a higher risk of overfitting, whereas our two-step learning benefits from the regularization of the classification task. Furthermore, our LMB-CNN+FCN outperforms ResNet+FCN, which demonstrates that the multi-branch architecture of LMB-CNN is more suitable than ResNet-101 for built-up area extraction from remote sensing images.
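The two-step strategy contrasted with end-to-end training above can be pictured as follows: first train the backbone on a patch-level classification task, then freeze it and fit a small FCN head on the low-resolution feature map of the whole image. The Python/PyTorch sketch below is only an illustration of that idea under simplified assumptions; TinyBackbone, FCNHead, and the dummy data are placeholders, not the paper's LMB-CNN, FCN configuration, or training details.

# Minimal sketch of a two-step training strategy (PyTorch).
# TinyBackbone and FCNHead are stand-ins, not the paper's networks.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for the classification CNN; yields a low-resolution
    feature map when used fully convolutionally."""
    def __init__(self, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(channels, 2)  # built-up vs. background patch label

    def forward(self, x, return_features=False):
        f = self.features(x)
        if return_features:
            return f                                  # dense feature map for step 2
        return self.classifier(self.pool(f).flatten(1))  # patch logits for step 1

class FCNHead(nn.Module):
    """1x1 conv head that turns the low-resolution feature map into a
    per-pixel built-up / background score map."""
    def __init__(self, in_channels=64, num_classes=2):
        super().__init__()
        self.score = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, f, out_size):
        logits = self.score(f)
        # upsample back to the input resolution for dense prediction
        return nn.functional.interpolate(logits, size=out_size, mode="bilinear",
                                         align_corners=False)

backbone, head = TinyBackbone(), FCNHead()
ce = nn.CrossEntropyLoss()

# Step 1: train the backbone as a patch classifier (the regularizing task).
opt1 = torch.optim.Adam(backbone.parameters(), lr=1e-3)
patches = torch.randn(8, 3, 64, 64)            # dummy image patches
patch_labels = torch.randint(0, 2, (8,))       # dummy patch-level labels
loss1 = ce(backbone(patches), patch_labels)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Step 2: freeze the backbone, train only the FCN head on whole-image features.
for p in backbone.parameters():
    p.requires_grad = False
opt2 = torch.optim.Adam(head.parameters(), lr=1e-3)
image = torch.randn(1, 3, 256, 256)            # dummy "whole" image
mask = torch.randint(0, 2, (1, 256, 256))      # dummy per-pixel labels
with torch.no_grad():
    feats = backbone(image, return_features=True)
loss2 = ce(head(feats, image.shape[-2:]), mask)
opt2.zero_grad(); loss2.backward(); opt2.step()

Because the backbone is fixed in step 2, only the lightweight head is fit to the dense labels, which is one way the classification task can act as a regularizer compared with end-to-end training.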
Working with a Smaller Training Set
We also conducted an experiment with a smaller training set. We took 60% of the original training set, that is, 10 240 × 10 240 images, to train the network model from scratch. The newly trained network achieved an overall accuracy of about 98.7% on the test set, and the false alarm and missing alarm rates decreased by only 0.01%. These results show that the proposed method can work with a smaller training set.
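As a rough illustration only (the directory name and file format below are assumptions, not details from the paper), such a reduced training set can be obtained by randomly keeping 60% of the training tiles before retraining from scratch:

# Hypothetical sketch: sample 60% of the training tiles for retraining.
import random
from pathlib import Path

random.seed(0)                                     # reproducible subset
tiles = sorted(Path("train_tiles").glob("*.tif"))  # assumed tile directory
subset = random.sample(tiles, k=int(0.6 * len(tiles)))
# `subset` then feeds the same training pipeline as the full set, with the
# network weights re-initialized so training starts from scratch.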
Generalization Ability of Our Proposed Built-Up Area Detection Approach
To verify the generalization ability of our algorithm, we tested the proposed method on four WorldView-3 images without
Table 5. Evaluation indexes of unsupervised algorithms and ours.
(CGEO: Li et al. 2015b; MHEC: Kovács and Szirányi 2013; PanTex: Pesaresi et al. 2009; MBI: Huang and Zhang 2012.)

Index         No.   CGEO     MHEC     PanTex   MBI      Ours-single   LMB-CNN+FCN (Ours)
User.Acc      1     0.4727   0.3108   0.3321   0.1768   0.7916        0.7640
              2     0.9631   0.3266   0.2937   0.1641   0.8162        0.8094
              3     0.4786   0.6635   0.4480   0.3388   0.7765        0.7830
              4     0.5124   0.7645   0.4542   0.4378   0.7879        0.7851
              5     0.9537   0.9774   0.9675   0.9396   0.9254        0.9186
              6     0.6738   0.9060   0.6342   0.6922   0.8369        0.8299
              7     0.1938   0.1849   0.1184   0.1303   0.8370        0.8155
              8     0.0000   0.0875   0.0658   0.0992   0.6981        0.7052
              9     0.7291   0.8468   0.5508   0.4224   0.7500        0.7767
              10    0.7442   0.8269   0.4165   0.4657   0.8188        0.8086
              All   0.6178   0.5779   0.4299   0.4022   0.8303        0.8285
Prod.Acc      1     0.9851   0.9380   0.7448   0.3657   0.9325        0.9387
              2     0.0458   0.9649   0.7906   0.4487   0.8601        0.8669
              3     0.9789   0.9067   0.7005   0.5514   0.9674        0.9694
              4     0.9361   0.7283   0.6263   0.4035   0.9706        0.9771
              5     0.9680   0.8202   0.7352   0.5096   0.9895        0.9904
              6     0.9685   0.8747   0.8972   0.5129   0.9756        0.9749
              7     0.9890   0.8977   0.4883   0.4272   0.8388        0.8429
              8     0.0000   0.5480   0.6693   0.4079   0.9251        0.9431
              9     0.9840   0.9438   0.7516   0.4141   0.9868        0.9879
              10    0.9716   0.9165   0.8872   0.4653   0.9777        0.9797
              All   0.9039   0.8529   0.7445   0.4693   0.9664        0.9691
Overall.Acc   1     0.9341   0.8733   0.8963   0.8617   0.9815        0.9792
              2     0.9478   0.8895   0.8848   0.8452   0.9818        0.9816
              3     0.8869   0.9425   0.8791   0.8415   0.9677        0.9689
              4     0.8462   0.9201   0.8186   0.8204   0.9532        0.9532
              5     0.9665   0.9158   0.8773   0.7784   0.9617        0.9587
              6     0.9242   0.9673   0.9060   0.8916   0.9675        0.9659
              7     0.7634   0.7671   0.7621   0.8035   0.9814        0.9800
              8     0.9606   0.7640   0.6233   0.8355   0.9818        0.9827
              9     0.9376   0.9629   0.8591   0.8115   0.9440        0.9516
              10    0.9536   0.9647   0.8264   0.8632   0.9694        0.9677
              All   0.9121   0.8967   0.8333   0.8352   0.9658        0.9690
Mean IOU      1     0.6997   0.5852   0.5948   0.4970   0.8646        0.8528
              2     0.4967   0.6030                     0.8507        0.8503
              3     0.6739   0.7788                     0.8604        0.8648
              4     0.6570   0.7522                     0.8570        0.8577
              5     0.9339   0.8380                     0.9253        0.9197
              6     0.7852   0.8821   0.7412   0.6501   0.8908        0.8863
              7     0.4713   0.4677   0.4303   0.4548   0.8508        0.8434
              8     0.4803   0.4203   0.3385   0.4598   0.8210        0.8293
              9     0.8231   0.8811   0.6526   0.5311   0.8379        0.8558
              10    0.8377   0.8645   0.5999   0.5789   0.8844        0.8792
              All   0.7398   0.7044   0.5947   0.5503   0.8857        0.8858
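For reference, the indexes in Table 5 correspond to standard remote sensing accuracy measures: user's accuracy (precision of the built-up class), producer's accuracy (recall of the built-up class), overall accuracy, and mean intersection over union. The NumPy sketch below shows one common way to compute them from a binary prediction map; it is not the authors' evaluation code, and the exact definitions used in the paper may differ in detail.

# Standard accuracy indexes from a binary map (built-up = 1, background = 0).
import numpy as np

def evaluation_indexes(pred, truth):
    """pred, truth: integer arrays of 0/1 labels with the same shape."""
    pred, truth = pred.ravel(), truth.ravel()
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    user_acc = tp / (tp + fp)                  # precision of the built-up class
    prod_acc = tp / (tp + fn)                  # recall of the built-up class
    overall_acc = (tp + tn) / (tp + fp + fn + tn)
    iou_built = tp / (tp + fp + fn)            # IoU of the built-up class
    iou_bg = tn / (tn + fp + fn)               # IoU of the background class
    mean_iou = (iou_built + iou_bg) / 2
    return user_acc, prod_acc, overall_acc, mean_iou

# toy example
pred = np.array([[1, 0], [1, 1]])
truth = np.array([[1, 0], [0, 1]])
print(evaluation_indexes(pred, truth))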
Table 6. The time consumption of unsupervised algorithms and ours.
(CGEO: Li et al. 2015b; MHEC: Kovács and Szirányi 2013; PanTex: Pesaresi et al. 2009; MBI: Huang and Zhang 2012.)

                CGEO     MHEC     PanTex   MBI     Ours-single   LMB-CNN+FCN (Ours)
Average time    199.7    4435.2   9538.7   942.3   3.0           3.0