
extent (e.g., along international borders). The tiling system
consists of 98 non-overlapping tiles, reduced from 445 indi-
vidual scene frames processed in previous versions.
Composite Target Dates
The MIICA methods used in the LANDFIRE RSLC process utilize imagery from two time periods per year: early growing season and late growing season. Pixels from scenes acquired from day-of-year 100 to day 199, centered on day 175, make up the first date; pixels from scenes acquired from day 200 to day 299, centered on day 250, make up the second date. If every scene were acquired and usable, there would be approximately six scenes available for each path/row for each date period, with additional data available in portions of the scene where there is side overlap. Due to acquisition schedules and cloud cover, fewer usable observations are typically available.
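As a minimal sketch (not the operational LANDFIRE code), a scene's acquisition day-of-year could be binned into these two periods and ranked by its distance from the period's target day; the PERIODS table and function name below are illustrative only.

```python
# Illustrative sketch: assign a scene's acquisition day-of-year (DOY) to one of
# the two composite periods and score it by distance from the target day.

# (period label, first DOY, last DOY, target DOY) as described in the text
PERIODS = [
    ("early", 100, 199, 175),
    ("late",  200, 299, 250),
]

def assign_period(doy):
    """Return (period_label, days_from_target), or None if DOY is outside both windows."""
    for label, start, end, target in PERIODS:
        if start <= doy <= end:
            return label, abs(doy - target)
    return None

# Example: a scene acquired on DOY 190 falls in the early period, 15 days off target.
print(assign_period(190))   # ('early', 15)
print(assign_period(250))   # ('late', 0)
print(assign_period(50))    # None
```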
Scene Selection
An automated scene selection process was developed, utilizing the U.S. Geological Survey (USGS) Landsat Bulk Metadata Service (usgs.gov/metadatalist.php). These records are updated daily and contain the most recent metadata on all available scenes in the USGS Landsat archive. The metadata records are parsed to include only scenes within CONUS, the years of interest, and the desired date range. From that reduced list, all scenes with less than 80 percent cloud cover based on the metadata and that intersect a given tile extent are selected for further processing. Two lists are generated from this process: a list of scene identifiers for use in scene ordering, and another more detailed list that includes scene ID, percent cloud cover, the nominal image corners, and a link to the online browse image for visual reference through the USGS Earth Explorer (usgs.gov) system. The longer list also contains tile framing information for the re-projection process that follows.
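A rough sketch of this filtering step is shown below. It is not the operational code; the metadata field names ("scene_id", "cloud_cover", corner coordinates, "browse_url") are placeholders and do not match the actual Bulk Metadata Service columns.

```python
# Hedged sketch of the scene-filtering step using placeholder metadata keys.

def bbox_intersects(a, b):
    """True if two (min_lon, min_lat, max_lon, max_lat) boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def select_scenes(records, tile_bbox, max_cloud=80.0):
    """Filter parsed metadata records to scenes usable for one tile.

    Returns (order_list, detail_list): scene IDs for ordering, plus a more
    detailed list with cloud cover, nominal corners, and a browse-image link.
    """
    order_list, detail_list = [], []
    for rec in records:
        scene_bbox = (rec["min_lon"], rec["min_lat"], rec["max_lon"], rec["max_lat"])
        if rec["cloud_cover"] < max_cloud and bbox_intersects(scene_bbox, tile_bbox):
            order_list.append(rec["scene_id"])
            detail_list.append((rec["scene_id"], rec["cloud_cover"],
                                scene_bbox, rec["browse_url"]))
    return order_list, detail_list
```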
Image Stack Processing
The USGS Earth Resources Observation and Science (EROS) Center Science Processing Architecture (ESPA; USGS, 2014) is a processing system/service designed to assist an end user in performing common image transformations beyond those provided by the Level 1T standard products generated by the USGS Level-1 Product Generation System (LPGS). Current services are available for TM and ETM+ imagery and include TOA reflectance conversions, SR computations using the LEDAPS algorithm, cloud mask generation, and image re-projection to a user-specified projection and output extent. Given a text list of Landsat scene IDs, projection parameters, and output image extent coordinates, the ESPA system orders and retrieves data from the LPGS and applies SR corrections, produces data masks, and computes projection transformations in a parallel processing environment, allowing several tiles to be processed concurrently. The extent of each tile is specified to the ESPA system at order time, and each scene is processed to SR, re-projected, and clipped to the tile extent. When complete, an automated script is used to parse the ESPA output webpage and download the processed imagery for composite generation.
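A generic, hedged sketch of such a download script is shown below. It simply scrapes archive links from a saved copy of the order status page and retrieves them; the real ESPA page layout, link format, and any authentication requirements are not reflected here and would need to be adapted.

```python
# Generic "parse and download" sketch; not the operational ESPA client.
import os
import re
import urllib.request

def download_processed_scenes(status_html, out_dir):
    """Find archive links (*.tar.gz) in the page text and download each one."""
    os.makedirs(out_dir, exist_ok=True)
    urls = re.findall(r'href="([^"]+\.tar\.gz)"', status_html)
    for url in urls:
        local_path = os.path.join(out_dir, os.path.basename(url))
        if not os.path.exists(local_path):   # skip files already retrieved
            urllib.request.urlretrieve(url, local_path)
    return urls
```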
In the absence of an operational SR algorithm for OLI data, scenes are converted to TOA reflectance as described above. In short, this process involves applying the reflectance rescaling coefficients given in the scene metadata and correcting for sun angle by dividing the rescaled pixel values by the cosine of the scene-center solar zenith angle. The resultant images are then re-projected to tile-based coordinates using the same projection parameters and image extents used in the ESPA processing. At this point, each image (TM, ETM+, and OLI) is stored in a multi-band file in 16-bit integer format with reflectance values mapped from 0 to 10,000.
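The conversion can be summarized in a few lines. The sketch below assumes the rescaling gain/offset and the sun elevation have already been read from the Level-1 scene metadata; how fill pixels are encoded in the scaled product is not specified in the text, so that part is illustrative only.

```python
# Sketch of the OLI TOA reflectance conversion and 0-10,000 integer scaling.
import numpy as np

def toa_reflectance(dn, mult, add, sun_elev_deg, fill_value=0):
    """Convert quantized DNs to sun-angle-corrected TOA reflectance,
    scaled to 16-bit integers in the range 0 to 10,000."""
    solar_zenith = np.deg2rad(90.0 - sun_elev_deg)   # zenith = 90 deg - elevation
    rho = (mult * dn.astype(np.float64) + add) / np.cos(solar_zenith)
    scaled = np.clip(np.round(rho * 10000.0), 0, 10000).astype(np.int16)
    scaled[dn == fill_value] = 0                     # fill handling is an assumption
    return scaled
```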
Data Masking
The LEDAPS system generates cloud, cloud shadow, and other masks as part of the process of generating surface reflectance products. These masks are utilized in the LANDFIRE RSLC processes to identify pixels to be omitted from further processing. An extent mask is also used to identify non-mapped areas (beyond international borders and large water bodies). LEDAPS mask layers used in the masking process include the cloud mask, adjacent cloud mask, cloud shadow mask, water mask, snow/ice mask, and the dark-dense-vegetation (DDV) mask. The process of mask determination is generally suitable for RSLC processing, though systematic errors do exist. Terrain shadows are often identified as water, so a percent-slope image derived from the National Elevation Dataset (http://usgs.gov/) is used to remove any water mask pixel with a slope of greater than 2 percent. There is often confusion between cloud shadow and water and between cloud and snow/ice, but since all of these classes are masked, these errors are deemed permissible. One exception is that snow/ice pixels that have been misidentified as cloud will have associated shadows cast by the LEDAPS algorithm. These, and potentially other, erroneous shadows are removed by using a "dark-pixel" threshold; only those shadow pixels with reflectance values below the threshold remain as shadow. Single pixels in the water and DDV masks are removed using a moving window filter, and then the adjacent cloud, cloud shadow, cloud, and DDV masks are all passed through a final filter to fill in small gaps between clouds, adjacent clouds, and cloud shadows, unless the pixels are within the DDV mask.
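Two of these refinement rules, the 2 percent slope test on the water mask and the dark-pixel test on the cloud shadow mask, can be expressed compactly. In the sketch below the inputs are NumPy boolean and floating-point arrays, and dark_max stands in for the reflectance threshold, whose operational value is not given in the text.

```python
def refine_masks(water_mask, shadow_mask, percent_slope, reflectance, dark_max):
    """Apply the slope test to the water mask and the dark-pixel test to the
    cloud-shadow mask. All inputs are 2D NumPy arrays of the same shape."""
    # Terrain shadows misclassified as water sit on sloped ground, so drop
    # water pixels where the slope exceeds 2 percent.
    water_ok = water_mask & (percent_slope <= 2.0)
    # Keep only shadow pixels that are actually dark in the reflectance band.
    shadow_ok = shadow_mask & (reflectance < dark_max)
    return water_ok, shadow_ok
```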
In addition to the revised LEDAPS masks, the ragged scan edges in TM images due to bumper mode (Storey and Choate, 2004; visible in Figure 1) are trimmed using nominal image corners from the metadata. The ragged edges are caused by the offset of the detectors in the focal plane and occur on the edge of valid image data. The multispectral information in TM and ETM+ images is also checked for image fill conditions. If any band of a multispectral pixel is filled, the entire pixel vector is masked.
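The fill check itself reduces to a single array operation; the sketch below assumes the bands are stacked in a (bands, rows, cols) NumPy array and that one fill value marks fill in every band.

```python
import numpy as np

def fill_mask(stack, fill_value=0):
    """Return a 2D boolean mask that is True wherever any band equals fill."""
    return np.any(stack == fill_value, axis=0)
```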
Since the LEDAPS system currently does not process OLI data, the LEDAPS-derived masks are not available. While the OLI QA band includes flags for cloud, cirrus cloud, snow/ice, and water, the quality of these flags varies by product and scene. Combining the cloud and cirrus cloud flags results in a reasonable cloud mask in the majority of cases, though it was found to be insufficient in several test scenes (Plate 1).
In addition, the snow/ice and water masks were found to be unrealistic in the majority of test scenes. Therefore, enhanced data masks for OLI were created using a combination of existing algorithms and modifications of existing algorithms. For each OLI scene, pixels that meet the fill condition in any band are considered fill in all bands, as in TM and ETM+ data. Next, spectral tests for snow/ice and water are conducted using the algorithm proposed by Luo et al. (2008) for use with Moderate Resolution Imaging Spectroradiometer data and the thresholds proposed by Oreopoulos et al. (2011) to adapt the algorithm to Landsat TOA reflectance data. The tests used for detecting snow/ice and water are shown in Equations 1 and 2, respectively.
Snow/Ice = B4 > 0.24 && B6 < 0.16 && B4 > B5    (1)

where B4, B5, and B6 are un-scaled OLI reflectance values for bands 4, 5, and 6, respectively.

Water = (B4 > B5 && B4 > B6 * 0.67 && B2 < 0.30 && B4 < 0.20) || (B4 > B5 * 0.80 && B4 > B6 * 0.67 && B4 < 0.06)    (2)

where B2, B4, B5, and B6 are un-scaled OLI reflectance values for bands 2, 4, 5, and 6, respectively.
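Expressed as array operations over un-scaled (0 to 1) OLI TOA reflectance bands, the two tests translate directly; the function names below are illustrative.

```python
# Equations 1 and 2 in array form; b2, b4, b5, b6 are OLI bands 2, 4, 5, and 6
# as un-scaled TOA reflectance arrays.
import numpy as np

def snow_ice_test(b4, b5, b6):
    """Equation 1: snow/ice spectral test."""
    return (b4 > 0.24) & (b6 < 0.16) & (b4 > b5)

def water_test(b2, b4, b5, b6):
    """Equation 2: water spectral test."""
    first  = (b4 > b5) & (b4 > b6 * 0.67) & (b2 < 0.30) & (b4 < 0.20)
    second = (b4 > b5 * 0.80) & (b4 > b6 * 0.67) & (b4 < 0.06)
    return first | second
```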