Road Network Extraction from High-Resolution
SAR Imagery Based on the Network Snake Model
Mehdi Saati and Jalal Amini
Abstract
Automatic road network extraction from satellite images is currently considered to be an important research trend in the field of remote sensing and photogrammetry. This paper presents a method for the automatic extraction of road networks from synthetic aperture radar (SAR) imagery. The method consists of three steps. In the first step, road area candidates are detected based on the fusion of extracted features: minimum radiance, contrast, and direction of minimum radiance. In the second step, several refinement criteria are used to discard false candidates, and a morphological thinning operator is applied to the road areas to extract the road segments. Seed points of interest are then extracted for use in a network snake model, which is employed in the third step to connect the seed points and form the road network. Within the network snake model, the energy function of the snake is first minimized, and a perceptual grouping algorithm is then applied to discard redundant segments and avoid gaps between segments. The proposed algorithm is tested on TerraSAR-X images of different areas. The experimental results reveal that the proposed method is effective in terms of correctness, completeness, and quality. An accuracy assessment showed that the proposed model is capable of achieving a quality index between 56 and 82.5 percent.
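As a rough illustration of the second step, the sketch below thins a binary road-area mask to one-pixel-wide segments and samples candidate seed points from the resulting skeleton. It is a minimal sketch only: scikit-image and the sampling heuristic are assumptions for illustration, not the implementation used in the paper.

```python
# Illustrative sketch of the thinning and seed-point step; scikit-image
# and the sampling rule are assumptions, not the paper's implementation.
import numpy as np
from skimage.morphology import skeletonize

def extract_road_segments(road_mask: np.ndarray) -> np.ndarray:
    """Thin a binary road-area mask to a one-pixel-wide skeleton."""
    return skeletonize(road_mask.astype(bool))

def sample_seed_points(skeleton: np.ndarray, step: int = 25):
    """Sample every 'step'-th skeleton pixel as a candidate seed point
    (an illustrative heuristic, not the paper's seed-selection rule)."""
    rows, cols = np.nonzero(skeleton)
    return list(zip(rows[::step].tolist(), cols[::step].tolist()))
```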
Introduction
As roads are an essential part of a modern transportation system, road network extraction has attracted increasing interest in practical applications (Maboudi et al., 2016). Road network extraction is not only scientifically challenging but is also of major practical importance in military and civilian domains, e.g., map updating (Mena, 2003; Nakaguro et al., 2011; Sun et al., 2014), navigation (Matkan et al., 2014), and urban evolution monitoring, as well as disaster relief and response (Chen et al., 2008). When handling these applications, the road position first needs to be determined before follow-up functions can proceed. Furthermore, in image processing and remote sensing, the road feature is the foundation of many complex tasks, such as image registration (Dell’Acqua et al., 2004) and urban block outlining (Dell’Acqua et al., 2009).
Today, high-resolution radar and optical images are used in a wide variety of applications, in particular for the automatic extraction of road networks. Motivating factors for using radar images include their all-weather imaging capability and day/night data acquisition (Tupin et al., 1998).
To manage the automatic road extraction task, three levels of analysis should be considered: (a) Low-level road feature extraction, which aims to extract information indicating the likelihood of a particular pixel being located on a road segment. This level generally includes the following methods: line detectors (Tupin et al., 1998), classification-based methods (Dell’Acqua and Gamba, 2001), and template matching (Cheng et al., 2011). (b) Medium-level road segment extraction, in which local properties based on the output of the extracted road features are tested to produce qualified segments. Methods include the threshold-based technique (Tupin et al., 1998) and Bayesian analysis (Stilla and Hedman, 2010). This procedure can be skipped in a direct transition from low-level to high-level processing. (c) High-level road network construction, which aims to fill the gaps between the road segments, discard false segments, and connect the road candidate portions from a global perspective. Common approaches at this level include Markov random field (MRF) models (Tupin et al., 1998), graph-based methods (Gerke and Heipke, 2008), methods based on the active contour model (ACM) (Bentabet et al., 2003; Leninisha and Vani, 2015; Liu et al., 2013), region growing-based methods (Baumgartner et al., 1999; Lu et al., 2014), dynamic programming (Dell’Acqua and Gamba, 2001), the genetic algorithm (GA) (Jeon et al., 2002), and particle filtering (Liu et al., 2013).
The ACM is widely used to extract objects of interest (Niu, 2006). Kass et al. (1988) introduced the snake active contour model, in which the model performance depends upon an initial contour drawn close to the edge of the object of interest. Subsequently, various approaches have been introduced based on the ACM. The balloon model proposed by Cohen (1991) and Ünsalan and Sirmacek (2012) expands an initially defined region until it reaches the boundary of a road. Likewise, the geodesic active contour model (Caselles et al., 1997; Mssedi et al., 2011), region growing techniques based on the ACM (Amo et al., 2006), and the network snake method (Butenuth and Heipke, 2012) have been introduced for road extraction or updating the existing road database.
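For reference, the energy functional minimized by the classical snake of Kass et al. (1988), which the network snake model extends from a single contour to a set of connected curves with fixed topology, can be written as

E_{\text{snake}} = \int_{0}^{1} \left[ \tfrac{1}{2}\big( \alpha(s)\,\lvert v_s(s) \rvert^{2} + \beta(s)\,\lvert v_{ss}(s) \rvert^{2} \big) + E_{\text{image}}\big(v(s)\big) + E_{\text{con}}\big(v(s)\big) \right] ds,

where v(s) = (x(s), y(s)) is the parametric contour, the weights α(s) and β(s) control the elasticity and rigidity of the internal energy, E_image attracts the contour to image features such as lines and edges, and E_con imposes external constraints. In the network snake setting, the energy is defined over the whole set of curves so that node points shared by several curves preserve the network topology during optimization.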
It is important to note that Butenuth and Heipke (2012) updated road vectors using an airborne SAR image together with the existing road database, which was used as the initial value for the network snake model. The topological properties of the road database are preserved exactly and no change is possible, whereas the existing road database (which is not automatically detected) may contain surplus segments. To obtain an accurate and precise road network based on snake models, automatically extracted initial values are required. In this study, the initial seed point values for each snake are automatically extracted from SAR images, and the road network is then extracted. The following steps are used for road network extraction from high-resolution SAR images:
1. Road Area Detection: Features describing road properties in high-resolution SAR images are extracted and fused based on a fuzzy algorithm to detect the road areas (see the sketch following this list).
2. Refinement: Several spatial and spectral criteria are taken into account to refine the detected road areas.
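A minimal sketch of how a fuzzy fusion of the three road features named in the abstract (minimum radiance, contrast, and direction of minimum radiance) might look is given below. The membership functions, their thresholds, the fuzzy AND (minimum) operator, and the use of a directional-consistency score as a proxy for the direction feature are all assumptions for illustration, not the rules used in the paper.

```python
# Hypothetical sketch of a fuzzy fusion of road features; membership
# functions, thresholds, and the min-operator fusion are illustrative
# assumptions, not the paper's fuzzy algorithm.
import numpy as np

def linear_membership(x, low, high, decreasing=False):
    """Map feature values to [0, 1] with a linear membership function."""
    m = np.clip((x - low) / float(high - low), 0.0, 1.0)
    return 1.0 - m if decreasing else m

def fuse_road_features(min_radiance, contrast, direction_consistency):
    """Combine feature memberships with a fuzzy AND (pixel-wise minimum)."""
    mu_dark = linear_membership(min_radiance, low=0.0, high=120.0, decreasing=True)  # roads appear dark in SAR
    mu_contrast = linear_membership(contrast, low=0.1, high=0.6)                     # strong local contrast
    mu_direction = linear_membership(direction_consistency, low=0.3, high=0.9)       # consistent local direction
    return np.minimum(np.minimum(mu_dark, mu_contrast), mu_direction)

# road_candidates = fuse_road_features(minrad, contr, dircons) > 0.5  # threshold the fused membership
```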
The Department of Remote Sensing Engineering, School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran.
Photogrammetric Engineering & Remote Sensing, Vol. 83, No. 3, March 2017, pp. 207–215. 0099-1112/17/207–215. © 2017 American Society for Photogrammetry and Remote Sensing. doi: 10.14358/PERS.83.3.207