in other words, still tethered to expensive custom hardware to make point cloud-based products work for an emerging user base. By 1999, a second wave of software came about due to the growing use of computers in the workplace. It was centered on client-server-based workflows (Frei et al. 2005). Software like Cyclone from Leica Geosystems, developed by Cyra Technologies as the successor to CGP, is one such example from this period.
It was only in the 2000s that computer graphics became more affordable, because of companies like AMD and Nvidia. A substantial price and performance gap had existed between general and high-performance computing prior to this. For example, manufacturers like IBM viewed personal computing as the preserve of scientific or business machine applications (DeCuir and Nicholson 2016). The functionality and screen displays of personal computers were generally geared toward productivity software like spreadsheets and word-processing packages; they were not primarily made to be used for sophisticated or visually complex graphics. Consumer applications that pushed those boundaries had their roots in entertainment or creative computing activities, such as video games and audio and video production.
The Point Cloud
In the 1980s, the point cloud was revisited as an alternative to surface modeling, which had become a technique commonly used in computer graphics-based disciplines (Levoy and Whitted 1985; Levoy 2007). Earlier work on “virtual images” in the field of holography, however, had demonstrated the potential for documenting and presenting a scene in 3D using light-based techniques. The train examples generated by Leith and Upatnieks (1964), one of which is shown in Figure 1a, are a famous instance of this earlier work. CAD users started to realize that solid models made drafting and design processes more efficient: they could present actual surface conditions rather than working from plan drawings (Levoy and Whitted 1985; Besl and McKay 1992; Kacyra et al. 1997; Addison and Gaiani 2000; Dekeyser et al. 2003). At the same time, the
potential to use personal computers for graphic-intensive tasks
was also starting to emerge. For example, Amiga computers equipped with the Video Toaster from NewTek laid the foundation for reasonably priced ray-tracing software like LightWave 3D (DeCuir and Nicholson 2016). There was even a Scannerless Range Imager developed at Sandia National Laboratories to run on the Amiga (Sackos et al. 1998). Digital terrain modeling and height mapping had also become more accessible to a general audience. Design packages like Alias on SGI machines and fractal landscape generators like Scene Generator were bridging a gap that would later be filled by point clouds (Jaenisch et al. 1994; Campbell 1997).
Point clouds provided a template from which scenes and objects could be modeled. From these point clouds, the “as-built” or “as-is” conditions of a scene could be used as the baseline data for survey-driven projects. This approach was fundamental to the CH-based workflows used by the Coignard family (father and two sons) and EDF engineers like Guillaume Thibault. This early “as-built” work, as applied to artifacts and cultural heritage sites, is explored in more detail in part two (Bommelaer and Albouy 1997; Coignard 1999; Thibault and Martinez 2007). Like the pixels of a photograph, which have an x- and a y-coordinate, a point cloud documents an environment with coordinates, except in three dimensions (Levoy and Whitted 1985; Levoy 2007). The z-coordinates give depth to the collected scene. Other information, like height-field data and surface reflectance, can also be retrieved (Nitzan, Brain and Duda 1977; Y. Chen and Medioni 1992). One of the key outcomes of the relationship between Mensi and EDF that remains in use is CloudCompare, discussed in more detail in Figure 7 (Daniel Girardeau-Montaut, email to author, April 15, 2012). This EDF-funded point-cloud software became open source in 2009 (Girardeau-Montaut, n.d.).
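To make the photograph analogy concrete, the short sketch below represents a point cloud as a simple array of x-, y- and z-coordinates with a per-point reflectance value. It is a minimal illustration in Python using NumPy; the layout and the sample values are assumptions made for the example rather than the format of any particular scanner or of CloudCompare.

import numpy as np

# A point cloud as an N x 3 array of x, y, z coordinates (in meters).
# Each row is one returned laser point; the z column carries the depth
# of the scene, much as rows and columns carry x and y in a photograph.
points = np.array([
    [1.204, 0.338, 2.511],
    [1.207, 0.341, 2.498],
    [1.211, 0.344, 2.487],
])

# Additional per-point information, such as surface reflectance,
# can be stored alongside the coordinates.
reflectance = np.array([0.62, 0.58, 0.61])

depths = points[:, 2]
print("Mean depth of sampled points: %.3f m" % depths.mean())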
Summary
Part one of this article ends at the point where midrange TLS information starts to be collected to known accuracies, with repeatable results, and at the standards of resolution necessary for “as-built” documentation. These considerations, otherwise known in 3D imaging as a theoretical camera model, enable an object or scene to be documented to within millimeters and centimeters over a known distance. Examples of repeatability of results within the context of midrange TLS include (a) the ability to rescan an object or site with the same instrument and workflow and compare the results to previous results or a baseline, and (b) features of the system architecture of the instrument, such as the number of times the signal of points returning to the scanner can be sampled to help improve data quality.
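As a rough illustration of the first kind of repeatability check, the sketch below compares a rescan of a scene against a baseline scan using nearest-neighbor distances. It is a simplified stand-in, written in Python with NumPy and SciPy, for the cloud-to-cloud comparisons performed in practice with tools such as CloudCompare; the 5 mm tolerance and the synthetic data are assumptions made for the example.

import numpy as np
from scipy.spatial import cKDTree

def compare_to_baseline(baseline, rescan, tolerance=0.005):
    # Fraction of rescanned points lying within `tolerance` meters of
    # their nearest baseline point, plus the mean point-to-point deviation.
    tree = cKDTree(baseline)            # index the baseline scan
    distances, _ = tree.query(rescan)   # nearest-neighbor distance per point
    return (distances <= tolerance).mean(), distances.mean()

# Synthetic example: a baseline cloud and a rescan perturbed by ~2 mm of noise.
rng = np.random.default_rng(0)
baseline = rng.uniform(0.0, 10.0, size=(5000, 3))
rescan = baseline + rng.normal(scale=0.002, size=baseline.shape)

within, mean_dev = compare_to_baseline(baseline, rescan)
print("%.1f%% of points within 5 mm; mean deviation %.2f mm"
      % (100 * within, 1000 * mean_dev))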
Providing a historical perspective on this development helps the potential user of midrange TLS to better understand it in a number of ways. First, it helps provide context to what might otherwise be seen as a black-box technology, where the knowledge economy mainly resides with the manufacturers of the technology. Second, it helps to better contextualize how and why relative navigation remains at the core of midrange TLS technologies: because of early space and defense applications identified through agencies like NASA and funded by DARPA. Third, survey-based solutions used today are part of a lineage that can be traced back to vehicle-based solutions. In other words, the technology made the transition to tripod-based applications; it did not start out as a tripod-based survey method. Fourth, the development of midrange TLS was closely aligned with, and reactive to, broader developments in personal computing and microelectronics at the time. Fifth, a thread running throughout the history of midrange TLS is the need for applied research and development based on detailed information retrieval in environments otherwise hazardous to humans. For example, it is a common point of reference between the foundation companies for commercial use, discussed in more detail in part two.
Acknowledgements
The author would like to thank the many leaders in their field
who enthusiastically waited for these articles to make it to
publication. This includes core people from Cyra Technologies, K2T, Mensi and Riegl. Both articles pay homage to them
and many other pioneers in 3D imaging.
The articles are dedicated to Rebecca, Fonzie, Harvey,
Poppy and Willow. This article was made open access by
Remotely Interested LLC.
References
Addison, A. C. and M. Gaiani. 2000. Virtualized architectural heritage:
New tools and techniques.
IEEE MultiMedia
7 (2):26–31.
Akeley, K. 1993. RealityEngine graphics. Pages 109–116 in
SIGGRAPH ’93: Proceedings of the 20th Annual Conference
on Computer Graphics and Interactive Techniques
, held in
Anaheim, Calif., 2–6 August 1993. Edited by M. C. Whitton. New
York: Association for Computing Machinery.
Amann, M.-C., T. Bosch, M. Lescure, R. Myllylae and M. Rioux. 2001.
Laser ranging: A critical review of usual techniques for distance
measurement.
Optical Engineering
40 (1):10–19.
Besl, P. J. 1988. Active, optical range imaging sensors.
Machine Vision
and Applications
1 (2):127–152.