Visual and Machine Combined
Spatial information from remote sensing is often derived by
a combination of visual and machine methods. Typically, a
variety of features is first mapped by one of many machine-based
approaches; an analyst then reviews the results and revises
them through visual interpretation, using either the same
original data sources or different, often finer spatial
resolution, imagery. These quality control or quality assurance
activities are much more common in operational or applied
science efforts than in research environments. The following
sections present several of these hybrid approaches.
The US National Land Cover Database (NLCD) is an effort
by several government agencies (primarily the USGS)
to provide accurate Landsat-derived LULC products for the
entire country at multiple timeframes, beginning with 1992
and followed by 2001, 2006, 2011, and 2016. The LULC
information extraction is conducted by a relatively complex
and very rigorous machine-based process, typically employing
two seasons of Landsat imagery, leaf-on and leaf-off, as well as
various ancillary information in a data mining approach.
The NLCD maps 16 classes for most of the US, with an
additional four classes in Alaska (Homer et al. 2007).
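The value of combining leaf-on and leaf-off imagery can be illustrated with a toy example. The NDVI values, thresholds, and class rules below are hypothetical, standing in only for the kind of seasonal rules a data mining classifier derives from training data:

```python
import numpy as np

# Hypothetical per-pixel NDVI for the same four pixels in two seasons.
# Deciduous forest is green in leaf-on imagery but not leaf-off,
# while evergreen forest stays green in both seasons.
ndvi_leaf_on = np.array([0.8, 0.8, 0.2, 0.5])
ndvi_leaf_off = np.array([0.7, 0.2, 0.1, 0.4])

# A toy two-season rule: classes separable only by using both dates.
labels = np.where(
    (ndvi_leaf_on > 0.6) & (ndvi_leaf_off > 0.6), "evergreen forest",
    np.where((ndvi_leaf_on > 0.6) & (ndvi_leaf_off < 0.3), "deciduous forest",
             "other"))

print(labels)  # ['evergreen forest' 'deciduous forest' 'other' 'other']
```

A single-date classifier could not distinguish the first two pixels, which have identical leaf-on NDVI; the leaf-off date makes the separation trivial.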
The process of extracting spatial information for the NLCD
has evolved and improved over time. Nonetheless, the end
product is visually reviewed by an analyst and changed as
appropriate in a process called refinement. The percentage of
pixels changed is small, but it exemplifies the combined visual
and machine approach to producing more accurate spatial data
(Jin et al. 2013, Wickham et al. 2013, Homer et al. 2015).
The NOAA Coastal Change Analysis Program (C-CAP) creates
LULC and LULC change (LULCC) information for the
coastal areas of the US. The program has a very ambitious
scheme of 24 classes, including many similar wetland
communities. The primary data for this effort are imagery
from the various Landsat sensors, supplemented by other
remote sensing products and ancillary data.
The machine analysis procedures for C-CAP information
extraction have varied over time as well as by the contractor
conducting the information extraction. The products are
subject to a rigorous thematic accuracy assessment; the
2006-2010 LULCC products had an overall accuracy of about
85% (McCombs et al. 2016). The process always includes
a final visual review and editing to improve product quality.
This visual review, or refinement, is considered a critical
component of data production, but it is also a very expensive
one, estimated in a personal communication at 70% of the
total time and budget.
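An overall accuracy figure such as the roughly 85% reported for the 2006-2010 C-CAP products is computed from an error (confusion) matrix of mapped versus reference samples. The matrix values below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

# Hypothetical confusion matrix for a 4-class assessment:
# rows = mapped class, columns = reference class.
cm = np.array([
    [50,  3,  2,  1],
    [ 4, 45,  3,  2],
    [ 2,  2, 40,  4],
    [ 1,  3,  2, 36],
])

# Overall accuracy: correctly classified samples (the diagonal)
# divided by the total number of samples.
overall = np.trace(cm) / cm.sum()

# Producer's accuracy (per reference column) and user's accuracy
# (per mapped row) give the per-class view behind the single figure.
producers = np.diag(cm) / cm.sum(axis=0)
users = np.diag(cm) / cm.sum(axis=1)

print(f"Overall accuracy: {overall:.1%}")  # Overall accuracy: 85.5%
```

A single overall figure can hide weak individual classes, which is one reason programs like C-CAP also report per-class producer's and user's accuracies.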
Four remote sensing based LULC mapping efforts were
recently completed independently for the country of Malawi
for climate change and carbon tracking. These efforts were
funded by the United Nations Food and Agriculture
Organization (UNFAO), the World Bank, the Japan International
Cooperation Agency (JICA), and the United States Agency
for International Development (USAID). All four projects
produced maps for multiple years to assess LULCC. A
common thread through these efforts is that each completed
map used 2010 spaceborne imagery, allowing a comparison
of methods and results (Haack et al. 2015).
An image segmentation approach was selected by UNFAO.
This method, also called object-based as opposed to pixel-based,
divides the image into spatially contiguous and
spectrally homogeneous regions or objects (Im et al. 2008,
Blaschke 2010). The segmentation produces a vector layer
of objects, each representing a region of pixels that are
similar with respect to various characteristics or computed
properties, such as color, intensity, or texture, and assumed
to belong to the same basic LULC class (Budreski et al. 2007).
This method
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING
December 2017
Figure 4. USDA area frame national crop assessment program air
photo based sample unit for enumerator.
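The segmentation idea described above for the UNFAO effort can be sketched in minimal form. This is not the actual UNFAO workflow; it is a crude illustration, on a synthetic single-band image, of grouping pixels into spatially connected, spectrally similar objects and computing per-object properties:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic single-band "image": two spectrally distinct halves plus noise.
image = np.zeros((60, 60))
image[:, 30:] = 5.0
image += rng.normal(0.0, 0.2, image.shape)

# Crude segmentation: quantize the band into spectral bins, then find
# spatially connected components within each bin. Each component is an
# "object" of contiguous, spectrally similar pixels.
bins = np.digitize(image, bins=[2.5])  # two spectral bins
objects = np.zeros(image.shape, dtype=int)
next_label = 1
for b in np.unique(bins):
    labeled, n = ndimage.label(bins == b)
    objects[labeled > 0] = labeled[labeled > 0] + next_label - 1
    next_label += n

# Per-object mean value: the kind of computed property later used,
# along with texture and shape, to assign each object a LULC class.
means = ndimage.mean(image, labels=objects, index=range(1, next_label))
```

Operational object-based workflows use far more sophisticated multi-scale segmentation, but the end result is the same in kind: a layer of objects with attached spectral and spatial attributes, classified as units rather than as individual pixels.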