at risk if a newly damaged bridge fails before its condition can be determined because manual inspection was not properly prioritized. For this reason, techniques that use bi-temporal images to detect changes in a scene will be more valuable if they can achieve levels of accuracy similar to those of the single-date methods.
The use of bi-temporal (i.e., pre- and post-hazard) images proved more effective than single-date approaches in detecting and delineating fine-scale crack damage to critical infrastructure. The most effective of the change detection methods tested was the raster processing model. Figure 7 shows a section of an ABQ classification product at various processing stages, providing an example of how the image difference model detects changes in the scene and systematically eliminates changes not of interest, attempting to leave only new cracks highlighted in the final product. Figure 8 shows the initial input images and the final product, giving an indication of its potential utility. The raster processing model was robust across multiple scenes and surface material types, could be applied in near real time, and generated products that combined accurate crack classification with minimal false detection. One version of the model, employing a single set of parameters and thresholds, successfully detected new cracks in all study scenes. Conversely, the object-based methods required a different ruleset version to successfully classify each scene, compensating for the different surface material types. In addition to positive accuracy results, the ability of bi-temporal approaches to isolate new cracks should enable image analysts to effectively assist first responders by providing extremely valuable information about when observed damage occurred.
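The image-difference workflow described above (difference the two dates, keep only negative brightness changes, then apply size filters) can be sketched in NumPy/SciPy. This is an illustrative reconstruction under stated assumptions, not the authors' ERDAS Imagine model: the function name, default thresholds, and the assumption that new cracks appear darker in the post-hazard image are all hypothetical.

```python
import numpy as np
from scipy import ndimage

def detect_new_cracks(t1, t2, diff_thresh=-30, min_size=20, max_size=500):
    """Illustrative bi-temporal change-detection sketch (not the ERDAS model).

    t1, t2 : 2-D grayscale arrays of the pre- and post-hazard images.
    New cracks are assumed to appear darker in the post-hazard image,
    so only negative brightness changes are retained.
    """
    # Image difference (signed, so darkening shows up as negative values)
    diff = t2.astype(np.int16) - t1.astype(np.int16)
    # Threshold: keep negative changes only
    changes = diff < diff_thresh
    # Connected components, then minimum-size and large-object elimination
    labels, n = ndimage.label(changes)
    sizes = ndimage.sum(changes, labels, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = (sizes >= min_size) & (sizes <= max_size)
    return keep[labels]  # boolean mask of candidate new cracks
```

The size filters play the role the paper assigns to the minimum-size and large-object elimination steps: a one-pixel speck is dropped as noise, while a large darkened region (e.g., a newly parked car or a shadow) exceeds the maximum size and is also eliminated.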
Inconsistent results between scenes were encountered when determining the most robust method of detecting and delineating cracks on road and bridge surfaces. The scene with both the highest producer's and user's accuracy, Lake Murray 1 (LM1), exhibited far less scene complexity and fewer shadow effects than the Albuquerque (ABQ) site. The ABQ scene contained many more cars, shadows, road markings, and lane dividers, all of which contributed to the high number of false detections. The target crack features in the ABQ scene were also much smaller than those in the LM1 scene. While the width in pixels of the smallest target crack feature in the ABQ scene is the same as that of the smallest crack feature in the LM1 scene, the simulated crack targets in the LM1 scene are relatively longer. The ABQ scene represents the smallest crack feature that can be detected using the methods tested while also minimizing false detection, whereas the LM1 scene depicts a target size more indicative of real-world damage that could affect the safety of an actual piece of critical infrastructure. This difference represents the two extremes of target size investigated in this study. If only cracks of a certain size are of interest, parameters related to the small and large object elimination tools could be adjusted to further refine classification results based on minimum and maximum size criteria.
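Such a size-criteria refinement could be parameterized in real-world units rather than pixels, so that the same thresholds transfer across scenes with different ground sample distances. The sketch below is a hypothetical illustration of that idea; the function name, the use of bounding-box length as the size measure, and the length bounds are assumptions, not part of the paper's workflow.

```python
import numpy as np
from scipy import ndimage

def filter_by_size(mask, gsd_cm, min_len_cm, max_len_cm):
    """Keep detected components whose longest bounding-box axis falls
    inside a real-world length range.

    mask       : boolean array of candidate crack pixels.
    gsd_cm     : ground sample distance (cm per pixel).
    min/max_len_cm : hypothetical user-chosen length bounds.
    """
    labels, n = ndimage.label(mask)
    out = np.zeros_like(mask, dtype=bool)
    # One bounding-box slice per labeled component
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        length_cm = max(h, w) * gsd_cm
        if min_len_cm <= length_cm <= max_len_cm:
            out[sl] |= (labels[sl] == i)  # keep only this component's pixels
    return out
```

Expressing the bounds in centimeters means an analyst interested only in structurally significant cracks could tighten `min_len_cm` without re-deriving pixel thresholds for each sensor altitude.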
As shown in Table 4, the method with the highest composite of producer's and user's accuracy is the raster processing model created in ERDAS Imagine using image differencing combined with a sequence of kernel-based spatial filtering techniques. The 69.1 percent user's accuracy of the raster change detection model is significantly higher than that of any of the other methods tested. With considerably fewer false
Figure 7. Enlarged section of an ABQ scene classification generated with the raster processing model at various stages of processing: time 1 (a), time 2 (b), all thresholded changes (c), negative changes only (d), minimum size filter applied (e), and large object elimination filter applied (f). Red = detected crack, blue = preexisting crack, green = new crack.
84
February 2018
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING