are many uses for this photography, but for the USDA it is primarily used to monitor farmers' compliance with agreements on crop type or on allowing fields to remain fallow. The
US Geological Survey (USGS) employed NAIP images in a large regional study encompassing portions of several states in the Marcellus Shale region of the northeastern US to assess the impact of hydraulic fracturing activities for oil and natural gas (Slonecker and Milheim 2015). This detailed study was also based entirely on visual interpretation (Figure 2).
An important and growing remote sensing activity is
disaster assessment and response. For these applications, fine spatial resolution images from spaceborne sensors are typically employed because of the frequent revisit opportunities afforded by off-nadir viewing and by image acquisition agreements with suppliers. Affected countries can contact the United Nations Platform for Space-based Information for Disaster Management and Emergency Response, and providers have been generous in supplying imagery at no cost. Among many other examples, this imagery has been effectively employed in situations including the tsunami in Indonesia, the earthquakes in Haiti and Nepal, and, more recently, Hurricanes Irma and Maria. The analysis of these images was almost entirely visual (Kerle 2013, Kerle and Hoffman 2013) (Figure 3).
Globally, one of the largest application areas of remote sens-
ing is agriculture. These applications use a broad range of
sensors for crop mapping, yield forecasting, famine detec-
tion, and crop condition assessment, among other uses. The
USDA has established an area frame based upon different
spatial resolutions of remote sensing data to compile annual
agriculture statistics (Cotter et al. 2010). This process
begins with the construction of a spatial sampling framework on a state-by-state basis, in which Landsat imagery is visually examined to separate the landscape into seven land use strata, primarily differentiating levels of agricultural intensity. The process then divides the landscape into progressively smaller sampling segments using finer spatial resolution imagery. The end product is the random selection of about 11,000 sample segments, which are delineated on aerial photographs for enumerators to take into the field annually to collect detailed data, another visual interpretation effort (Figure 4). From these field samples, statistics are
aggregated to higher levels including by state and for the
entire country.
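As a rough sketch of the stratified selection step, and not the USDA's actual procedure, the following Python fragment draws a fixed number of sampling segments from each land use stratum. The segment list, the stratum codes, and the uniform per-stratum allocation are hypothetical placeholders.

import random
from collections import defaultdict

# Hypothetical segment list: each sampling segment carries an ID and a
# land use stratum code (1-7, differentiating agricultural intensity).
segments = [
    {"segment_id": i, "stratum": random.randint(1, 7)} for i in range(100_000)
]

# Segments to draw per stratum; a real design would allocate by stratum
# size and target precision rather than uniformly.
samples_per_stratum = 200

by_stratum = defaultdict(list)
for seg in segments:
    by_stratum[seg["stratum"]].append(seg)

selected = []
for stratum, members in sorted(by_stratum.items()):
    selected.extend(random.sample(members, min(samples_per_stratum, len(members))))

print(f"Selected {len(selected)} segments across {len(by_stratum)} strata")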
Machine Interpretation
The machine method of information extraction from remote
sensing data has many different names and procedures.
These procedures are sometimes called machine-based image analysis, but the terms computer, digital, or automated classification are also employed. The process generally begins with the compilation of spectral signatures, often from the imagery being analyzed (a step known as calibration), via one of two methods: supervised or unsupervised. An analyst then applies a signature-matching statistical decision rule, such as maximum likelihood, to obtain a classification that in turn yields a map or area statistics (Lawrence and Wright).
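To make the decision rule concrete, the sketch below shows a minimal Gaussian maximum likelihood classifier in Python, assuming each signature is summarized by a class mean vector and covariance matrix estimated from calibration pixels. The four-band synthetic pixels and the two classes are invented for illustration and do not represent any particular software package.

import numpy as np

def train_signatures(pixels, labels):
    # Estimate a mean vector and covariance matrix (the "spectral
    # signature") for each class from calibration pixels.
    sigs = {}
    for c in np.unique(labels):
        cls = pixels[labels == c]
        sigs[c] = (cls.mean(axis=0), np.cov(cls, rowvar=False))
    return sigs

def maximum_likelihood_classify(pixels, sigs):
    # Assign each pixel to the class with the highest Gaussian
    # log-likelihood under that class's signature.
    classes = sorted(sigs)
    scores = np.empty((pixels.shape[0], len(classes)))
    for j, c in enumerate(classes):
        mean, cov = sigs[c]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mean
        # Log-likelihood up to an additive constant.
        scores[:, j] = -0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d))
    return np.array(classes)[scores.argmax(axis=1)]

# Synthetic four-band calibration pixels for two made-up classes.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(50, 5, (100, 4)), rng.normal(120, 8, (100, 4))])
labels = np.array([0] * 100 + [1] * 100)
signatures = train_signatures(train, labels)
print(maximum_likelihood_classify(rng.normal(50, 5, (5, 4)), signatures))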
The impression is often given that this approach is to a large degree an automated process, but it generally is not, at least if the goal is good, or even reasonable, thematic
accuracy. The supervised extraction of spectral signatures
typically requires the analyst to identify multiple areas of
interest (AOI) or polygons of each class to be mapped for
the software to calculate spectral signatures. While at times
these AOIs are based upon field visits or other sources, they
are often identified by visual interpretation by the analyst.
That interpretation may be directly from the imagery being
studied or through the use of finer spatial resolution prod-
ucts that serve as the calibration data. The other approach
to signature extraction is unsupervised classification, or clustering. In this approach, the image data set is divided into clusters of pixels with similar spectral responses or signatures. These spectral signatures
are undefined and the analyst must label each cluster or
signature with a class name or designation. This is often
done by the analyst displaying the pixels for a cluster in a
distinct color in the image and then defining the class by
visual interpretation.
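The unsupervised route can be sketched in the same spirit, assuming a bare-bones k-means clusterer and a hand-built dictionary standing in for the analyst's visual labeling of clusters; the pixel values, the cluster count, and the class names are all hypothetical.

import numpy as np

def kmeans(pixels, k, iterations=20, seed=0):
    # Minimal k-means clustering of pixel spectra; production tools add
    # convergence tests, better initialization, and cluster statistics.
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to its nearest cluster center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        assignment = dists.argmin(axis=1)
        # Recompute each center from the pixels assigned to it.
        for j in range(k):
            if np.any(assignment == j):
                centers[j] = pixels[assignment == j].mean(axis=0)
    return assignment, centers

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(40, 4, (200, 4)), rng.normal(110, 6, (200, 4))])
clusters, centers = kmeans(pixels, k=2)

# The labeling step: cluster numbers are mapped to thematic class names
# only after the analyst inspects where each cluster falls (hypothetical).
cluster_labels = {0: "water", 1: "cropland"}
classified = np.array([cluster_labels[c] for c in clusters])
print(classified[:5], centers.round(1))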
All maps should have an accuracy assessment. That as-
sessment should include locational accuracy and thematic
accuracy. There are multiple methods to determine thematic accuracy and to present the results, but in remote sensing they essentially require comparing a sample of the classified results to known conditions, a process called validation (Congalton and Green 1999). Validation or "truth" data can come from multiple sources or procedures, including field work, but, as in signature extraction, it is often derived from visual interpretation of the subject imagery.
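As an illustration of one common way to summarize thematic accuracy, the sketch below builds an error matrix from a validation sample and derives overall, producer's, and user's accuracies; the class names and the sample labels are made up.

import numpy as np

def error_matrix(reference, classified, classes):
    # Cross-tabulate reference ("truth") labels against classified
    # labels: rows are classified classes, columns are reference classes.
    index = {c: i for i, c in enumerate(classes)}
    matrix = np.zeros((len(classes), len(classes)), dtype=int)
    for ref, cls in zip(reference, classified):
        matrix[index[cls], index[ref]] += 1
    return matrix

# Hypothetical validation sample of three classes.
classes = ["crop", "forest", "water"]
reference  = ["crop", "crop", "forest", "water", "forest", "crop", "water"]
classified = ["crop", "forest", "forest", "water", "forest", "crop", "crop"]

m = error_matrix(reference, classified, classes)
overall_accuracy = np.trace(m) / m.sum()   # correctly classified / total
producers = np.diag(m) / m.sum(axis=0)     # one minus omission error
users = np.diag(m) / m.sum(axis=1)         # one minus commission error
print(m)
print(f"Overall accuracy: {overall_accuracy:.2f}")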