
2 m for the LR images. The LR images were multispectral, while the HR images were panchromatic.
Experiment Details and Results
In this subsection, the LR multispectral image pairs were first composited into single-channel images. The composited image pairs were then super-resolved with the different image reconstruction models. The original HR images were used to generate the reference DSM, and the reconstructed HR images were used to generate the reconstructed DSMs, at 0.5 m resolution. It is worth noting that the RPC files of the reconstructed images had to be regenerated from the RPC files of the original LR images.
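For the RPC regeneration step, a minimal sketch is given below. It assumes the reconstruction is a pure constant-factor scaling of the pixel grid and that the RPC metadata is held in a Python dictionary of the standard RPC00B fields; under that assumption only the image-space offsets and scales change, while the rational polynomial coefficients, which act on normalized coordinates, carry over unchanged. The paper does not spell out its exact regeneration procedure, so this is an illustration rather than the authors' implementation.

    # Minimal sketch (not the authors' code): regenerate RPC metadata for an
    # image upscaled by a constant factor. Only the image-space offset/scale
    # fields change; the polynomial coefficients operate on normalized
    # coordinates and are kept as-is. The sub-pixel shift introduced by
    # pixel-center conventions is ignored here.
    def rescale_rpc(rpc: dict, factor: float = 4.0) -> dict:
        """Return RPC parameters referring to the `factor`-times upscaled grid.

        `rpc` is assumed to hold the standard RPC00B fields, e.g. 'LINE_OFF',
        'LINE_SCALE', 'SAMP_OFF', 'SAMP_SCALE' plus the coefficient lists.
        """
        scaled = dict(rpc)
        for key in ("LINE_OFF", "LINE_SCALE", "SAMP_OFF", "SAMP_SCALE"):
            scaled[key] = rpc[key] * factor
        return scaled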
Two different regions were selected to display the experimental results. One was a building region, used to demonstrate the robustness of the proposed framework over buildings; the other was a complex terrain containing forested areas and areas prone to mismatching. The visual comparisons are shown in Figure 6, and the quantitative evaluation results are listed in Table 4. In addition, an extra comparison experiment for the real-image situation was set up, in which LR DSMs were generated from the composited LR image pairs and upscaled four times to the resolution of the reference DSMs.
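As a rough illustration of this baseline, the sketch below interpolates an LR DSM grid by a factor of four to the reference grid spacing; the paper does not name the interpolator it used, so cubic-spline resampling via SciPy is an assumption here.

    import numpy as np
    from scipy.ndimage import zoom

    # Illustrative baseline (assumed implementation): interpolate a DSM
    # generated from the composited LR pair by 4x so it matches the 0.5 m
    # reference grid.
    def upscale_dsm(lr_dsm: np.ndarray, factor: int = 4) -> np.ndarray:
        """Resample an LR DSM grid by `factor` with order-3 spline interpolation."""
        return zoom(lr_dsm, factor, order=3)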
Experiment Analysis
The visual comparison in Figure 6 and the quantitative results in Table 4 led to the same conclusions as the simulated experiments. Moreover, as shown in region B of Figure 6, even though the RMSE and MRE statistics were abnormal over the complex terrain, the visual comparison and quantitative results still confirmed that the proposed framework could generate better subpixel-level DSMs. The proposed framework is therefore effective and useful under real-image conditions.
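For reference, the region-wise metrics reported in Table 4 can be computed as in the sketch below. RMSE is the usual root-mean-square height error against the reference DSM; MRE is read here as the mean relative height error, an expansion not spelled out on this page and therefore an assumption.

    import numpy as np

    # Sketch of the per-region accuracy metrics against the reference DSM.
    # MRE is interpreted as mean relative error (an assumption); NaN cells,
    # e.g. matching failures, are ignored.
    def dsm_metrics(rec: np.ndarray, ref: np.ndarray, eps: float = 1e-6):
        """Return (RMSE, MRE) for a reconstructed DSM versus the reference DSM."""
        diff = rec - ref
        valid = ~np.isnan(diff)
        rmse = float(np.sqrt(np.mean(diff[valid] ** 2)))
        mre = float(np.mean(np.abs(diff[valid]) / (np.abs(ref[valid]) + eps)))
        return rmse, mre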
It should also be noted that, for all models, the DSMs generated from the real image data were less satisfactory than those from the simulated experiments. For example, the RMSE deviations between VDSR-DSM and SRMD-DSM in regions A and B were only about 0.09 m and 0.8 m. The main reasons are two-fold.
First, the adopted SR models were actually designed for bicubic-downscaled LR images, whereas the degradation relationship between the HR images and the composited LR images was not that simple. Second, the composited images also exhibited a radiometric difference from the panchromatic images, which jeopardized the results. Therefore, even though SRMD took some imitated noise and degradations into consideration, its final results were still less significant than the simulated experimental results.
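To make the mismatch concrete, the sketch below contrasts the bicubic downscaling that the adopted SR models were trained for with a (still simplified) sensor-like degradation of blur, decimation, and noise. The kernel width and noise level are illustrative assumptions, and the real composited LR images additionally carry the radiometric difference mentioned above.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    # Degradation assumed by the adopted SR models: plain bicubic-like
    # downscaling of the HR image.
    def bicubic_degrade(hr: np.ndarray, factor: int = 4) -> np.ndarray:
        return zoom(hr, 1.0 / factor, order=3)

    # A closer, though still simplified, model of the real relationship:
    # optical blur, decimation, and sensor noise (parameters are illustrative).
    def sensor_like_degrade(hr: np.ndarray, factor: int = 4,
                            blur_sigma: float = 1.5, noise_std: float = 2.0) -> np.ndarray:
        blurred = gaussian_filter(hr, sigma=blur_sigma)
        lr = blurred[::factor, ::factor]
        return lr + np.random.normal(0.0, noise_std, lr.shape)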
Conclusion
In previous research, subpixel-level DSM generation mainly relied on multisource data fusion or DSM super-resolution, which raised the problem of acquiring different sources and types of data, or the demand for abundant DSM samples. To remove the reliance on multisource data or on large numbers of DSM samples, this paper proposed a simple but effective framework for generating subpixel-level DSMs with high quality and high fidelity. Specifically, because DSMs produced by dense matching depend strongly on the original aerospace images, we introduced a CNN-based method into the traditional DSM generation process as an image quality improvement strategy and then generated a high-fidelity, subpixel-level DSM. The experiments verified the feasibility and practicality of the proposed framework and presented high-fidelity, subpixel-level DSM results. The statistical results also confirmed the visible superiority of the proposed method over direct DSM upscaling under most terrain conditions.
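Reduced to a pseudocode-level Python sketch, the framework amounts to the pipeline below, where super_resolve, regen_rpc, and dense_match are caller-supplied placeholders (for example, a trained SR network, the RPC regeneration step, and an external dense-matching pipeline); they are not APIs defined in this paper.

    # High-level sketch of the proposed framework; the three callables are
    # placeholders for components the paper treats as existing building blocks.
    def subpixel_dsm(lr_pair, rpc_pair, super_resolve, regen_rpc, dense_match,
                     factor=4, gsd=0.5):
        """Generate a subpixel-level DSM from an LR stereo image pair."""
        sr_pair = [super_resolve(img) for img in lr_pair]        # image quality improvement
        sr_rpcs = [regen_rpc(rpc, factor) for rpc in rpc_pair]   # regenerated RPC files
        return dense_match(sr_pair, sr_rpcs, gsd)                # traditional dense matching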
However, the method proposed in this paper is only a first attempt at generating subpixel-level DSMs within the framework of improving the original image quality through SR models. The results are preliminary, and more experiments are needed to demonstrate their practicability. There is also still much room for improvement. For example, a new SR model that exploits the actual degradations between the corresponding HR and LR images needs to be developed to improve performance. Moreover, the references in all of our experiments would ideally be replaced by ground truth to avoid unknown errors introduced during the Ref-DSM generation procedure. These issues will be addressed in our future work.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under project numbers 41871368 and 41571434.
References
Aguilar, M. A., M. del Mar Saldana and F. J. Aguilar. 2014. Generation and quality assessment of stereo-extracted DSM from GeoEye-1 and WorldView-2 imagery. IEEE Transactions on Geoscience and Remote Sensing 52:1259–1271.
Chai, D. 2017. A probabilistic framework for building extraction from airborne color image and DSM. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 10:948–959.
Chang, H., D.-Y. Yeung and Y. Xiong. 2004. Super-resolution through neighbor embedding. Pages 275–282 in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Chi, M., A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen and Y. Zhu. 2016. Big data for remote sensing: Challenges and opportunities. Proceedings of the IEEE 1–13.
Dai, D., Y. Wang, Y. Chen and L. V. Gool. 2016. Is image super-resolution helpful for other vision tasks? In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV).
Dong, C., C. C. Loy, K. He and X. Tang. 2016. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38:295–307.
He, K., X. Zhang, S. Ren and J. Sun. 2016. Deep residual learning for image recognition. Pages 770–778 in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
…, X. Gu and H. Cai. 2017. Single-image super-resolution for remote sensing data using deep residual-learning neural network. In Proceedings of the International Conference on Neural Information Processing. Cham, Switzerland: Springer.
Jeon, D. S., S.-H. Baek, I. Choi and M. H. Kim. 2018. Enhancing the spatial resolution of stereo images using a parallax prior. Pages 1721–1730 in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Kim, J., J. K. Lee and K. M. Lee. 2016. Deeply-recursive convolutional network for image super-resolution. Pages 1637–1645 in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Kim, J., J. K. Lee and K. M. Lee. 2016. Accurate image super-resolution using very deep convolutional networks. Pages 1646–1654 in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Lei, S., Z. Shi and Z. Zou. 2017. Super-resolution for remote sensing images via local–global combined network. IEEE Geoscience and Remote Sensing Letters 14:1243–1247.
Li, Y., Y. Zhang, X. Huang, Z. Hu and J. Ma. 2018. Large-scale remote sensing image retrieval by deep hashing neural networks. IEEE Transactions on Geoscience and Remote Sensing 56:950–965.