A look at making real-imagery 3D scenes
(Texture mapping with nadir and oblique aerial imagery)
In the near future, the photogrammetry industry will be able to give city and county managers the ability to visualize their cityscapes in 3D for many types of analysis. Pipeline managers will be able to see their above-ground pipelines traversing the countryside for active risk assessment. As some will attest, true ortho and wire-framing have existed for over 15 years, but they require manual to semi-automated interaction with the stereo imagery. Oblique imagery has been in use for the last decade and should remain a strong product for city and county governments, along with emergency management, over the decade to come. Software for viewing and measuring oblique imagery will continue to be the norm across a variety of industries. Our focus here, however, is the refreshing trend toward the display and measurable use of 3D urban and infrastructure models built from aerial mapping, a trend we are watching make its way into the spotlight.
The concept is to use aerial imagery (nadir and oblique) as the input for producing real 3D textured surfaces. Using oblique and nadir imagery to texture 3D mesh models has been a research topic for more than a decade. The new trend, as the technology surfaces within the private photogrammetry industry, is for these textured surfaces draped on a mesh model to be produced as a by-product of a fully automated mapping workflow. It is predicted that, over the next five to ten years, these real-imagery 3D models of the landscape will become as common in client project deliveries as digital orthophotography had become by the late 1990s.
In a recent discussion with Jeff Kirk, VP of Software for Icaros (Fairfax, VA), I gained insight into how aerial photography is used to produce these real-imagery 3D scenes. Oblique and nadir imagery are processed through the normal photogrammetric workflow to produce aero-triangulated image blocks. The same image set is then used to recover accurate surface structure through dense correspondence matching and triangulation, much like a dense DSM but fully three-dimensional. With this methodology, the end product is both a view-navigation and visualization tool and a product from which the end user may extract measurements. The process goes something like this.
Corresponding image points are triangulated to 3D points on the Earth's surface (just as for traditional DTMs).
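To make that first step concrete, here is a minimal sketch of two-view linear (DLT) triangulation in Python. It assumes the aero-triangulation step has already produced a 3x4 projection matrix for each image; the function and variable names are illustrative only, not part of any particular vendor's workflow.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two images.

    P1, P2 : 3x4 projection matrices from the adjusted image block
    x1, x2 : corresponding pixel coordinates (u, v) in each image
    Returns the 3D ground point (X, Y, Z).
    """
    # Each image observation contributes two linear constraints on the
    # homogeneous ground point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

In practice every tie point is observed in many overlapping nadir and oblique frames, so the same linear system is simply extended with two rows per additional image.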
Points are then connected to make a surface. There are various methods for doing this. One method is to cast the points outward onto a huge sphere which, in the limit, becomes a flat surface. The points are then connected together on that flat surface, much like a connect-the-dots game. Finally, the points are transformed back to their original positions on the Earth's surface, but now they are connected to one another. This is called skinning the surface.
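A hedged sketch of that skinning idea follows, using Python with NumPy and SciPy: project the 3D points to a 2D parameterization, connect them there, then carry the connectivity back to the original points. For brevity the sketch simply drops the height coordinate rather than casting the points onto a sphere; the connect-the-dots step is the same.

import numpy as np
from scipy.spatial import Delaunay

def skin_surface(points_3d):
    """points_3d: (N, 3) array of ground points (X, Y, Z).

    Returns (vertices, faces): the original points plus an (M, 3) array
    of vertex indices forming triangles, i.e. the skinned surface.
    """
    # Connect-the-dots on the flattened (X, Y) positions...
    tri = Delaunay(points_3d[:, :2])
    # ...then reuse that connectivity for the full 3D points.
    return points_3d, tri.simplices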
Now we must colorize the surface from the original images. Texture maps are computed for the surface model by mapping images onto it, as shown in Figure 1. A simplified description of the process is that as the algorithm parses through each
Figure 1. Texture map produced from nadir/oblique imagery of Frederick, MD, by Icaros, Inc. The texture is an index, with the larger images on the left and smaller matching images on the right.
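As a generic illustration of how such a texture map can be assigned (one common strategy, not necessarily the exact Icaros algorithm), each triangle of the skinned surface is given the image that views it most directly, and the triangle's vertices are projected into that image to obtain texture coordinates. The camera dictionary, its view_dir field, and the projection matrices below are assumptions made for the example.

import numpy as np

def project(P, X):
    """Project a 3D point X with a 3x4 projection matrix P to pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def texture_triangle(vertices, cameras):
    """vertices: (3, 3) triangle corners; cameras: list of dicts with
    'P' (3x4 projection matrix) and 'view_dir' (unit vector toward the scene).

    Returns (best_index, uv): the chosen image and its (3, 2) texture
    coordinates for the triangle.
    """
    # Triangle normal (sign depends on vertex winding; assumed outward here).
    n = np.cross(vertices[1] - vertices[0], vertices[2] - vertices[0])
    n /= np.linalg.norm(n)
    # The best image is the one whose viewing direction most directly
    # opposes the surface normal, i.e. looks straight at the facet.
    scores = [-np.dot(cam["view_dir"], n) for cam in cameras]
    best = int(np.argmax(scores))
    uv = np.array([project(cameras[best]["P"], v) for v in vertices])
    return best, uv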