ASPRS Camera Calibration Panel Report

Prepared by
ASPRS – The Imaging and Geospatial Information Society

Sponsored by
U.S. Geological Survey

January 2000
Preface
The ASPRS Camera Calibration Panel was formed in response to a request submitted to the ASPRS in September 1998 by K. Eric Anderson, Chief of the Mapping Applications Center, USGS. The charge to the panel, provided below, summarizes the motivations behind the request. Panel members, drawn from government, academia, and industry, were identified in the fall of 1998. An initial progress report was presented in May 1999 at the ASPRS Annual Conference, as part of two sessions on camera calibration issues. Both sessions were well attended, and the participants’ comments clearly confirmed the importance of calibration to the community. A draft version of this report was posted for review and comment on the ASPRS website in the fall of 1999. In addition to comments on the website, individual panel members and the USGS have received direct correspondence concerning current and future calibration issues. Of particular note is a letter from the Management Association for Private Photogrammetric Surveyors (MAPPS). Portions of this letter are quoted in the recommendation section of this report.
The report discusses the purpose of geometric calibration and the need for standards and specifications. Technology issues including imaging technology developments, acceptance of those developments by the community, and trends in support systems are discussed. General calibration methods, current practices in the USA and the international community, and a discussion of calibration infrastructure options are also included. The report concludes with our recommendations and suggestions for future research.
Grateful acknowledgment is due each of the individual panel members for their enthusiastic participation in the process and their contributions to this report: Thomas Behan, New York State Department of Transportation; Steve DeLoach, EarthData Technologies; Geoff Gabbott, U. S. Department of Agriculture; Bradish F. Johnson, U. S. Geological Survey; Franz Leberl, Vexcel Image Information Engineering; George Lee, U. S. Geological Survey; Donald Light, Litton-Emerge; Dean Merchant, Ohio State University; Roger Pacey, LH Systems; Charles Toth, Center for Mapping, Ohio State University.
Panel Chair
Charge of Panel
The Camera Calibration Review Panel serves as an independent advisory group to the U.S. Geological Survey (USGS). Convened with the assistance of ASPRS - The Imaging and Geospatial Information Society, the Panel will review a wide range of issues related to sensor calibration facing the geospatial community in the future. In addition to validating existing requirements for aerial mapping camera calibration in the United States, the Panel will assess future technical developments related to sensor calibration and identify operational strategies to address the long term viability of these functions in support of the broad user community.
Since the USGS Optical Science Laboratory (OSL) currently represents a critical resource for the Nation, any recommended changes to the capabilities or operations of the OSL must ensure continued delivery to the user community of a high quality and level of service of sensor calibrations.
Questions
The Panel will address the following general questions and is free to augment or segment the questions as appropriate:
1. What is the future of geometric sensors, both airborne and spaceborne? What calibration requirements will exist in the future to support users of those sensors? Is there a technical gap between those requirements and the sensor calibration capabilities that currently exist or are planned for deployment?
2. If there is a continuing need for camera (sensor) calibration beyond current camera calibration capabilities, how should these services be delivered to the community? Who will be responsible for the calibration of the various different sensors? Should future sensor calibration capabilities focus on analog and/or digital sensors? Should laboratory and/or in situ calibration capabilities be supported?
3. Depending on recommended future methods and processes for sensor calibrations, what viable options exist to ensure adequate funding and staffing to support the recommended transition(s) for calibration services?
4. What is the best long-term operational strategy for continuation of calibration services? Is the calibration mission best supported by a public (e.g., Federal) or private organization, or combination of organizations? Where does the USGS OSL fit in the future plans for sensor calibrations?
5. How does the U.S. sensor calibration capability relate to those of other countries? Are there International sensor-calibration standards issues that would benefit from further study? What is, and/or should be, the relationship of U.S. sensor calibration capabilities to International capabilities and requirements? Should sensor calibration standards be global in nature?
Purpose of Geometric Calibration
Calibration is a refined form of measurement conducted to assign numbers that represent relationships among particular properties of a measurement system. The necessary accuracy of this procedure depends on requirements of the measurement system. If the camera is a component of the system, procedures used for calibration are determined by the camera type, by other components of the measurement system, and by the final intended accuracy to be produced by the calibrated measurement system. Without care in calibration of the camera and its associated system components, no rational prediction of system performance can be made.
If calibration results are to be used for photogrammetry, the calibration procedure should produce numerical values representing spatial relationships of the measurement system. These include numerical estimates of camera interior orientation represented by focal length, location of the principal point, and coefficients of appropriate models representing lens distortion. If the measurement system includes additional sensors, their relative spatial position and orientation properties must also be determined by calibration. The calibration procedure should include environmental influences if the full spatial accuracy capability of the system is to be realized. Calibration of the photogrammetric measurement system produces numerical estimates of system properties and estimates of reliability of the calibration results.
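As a concrete illustration of how these numerical values are used downstream, the sketch below reduces a measured photo coordinate to the principal point and removes symmetric radial lens distortion with a common odd-powered polynomial model. The model form and every number here are illustrative assumptions, not values from any actual calibration report, and sign conventions for the correction vary in practice.

```python
# Illustrative interior-orientation values (NOT from a real Report of Calibration).
F_MM = 152.804                  # calibrated focal length, used later in the
                                # collinearity equations
X0_MM, Y0_MM = 0.011, -0.004    # principal point offset (mm)
K1, K2, K3 = 1.2e-8, -3.5e-13, 0.0   # radial distortion coefficients
                                     # (units mm^-2, mm^-4, mm^-6)

def refine_photo_coordinate(x_mm, y_mm):
    """Shift a measured photo coordinate to the principal point and remove
    symmetric radial lens distortion (delta_r = K1*r^3 + K2*r^5 + K3*r^7)."""
    xb, yb = x_mm - X0_MM, y_mm - Y0_MM
    r2 = xb * xb + yb * yb
    dr_over_r = K1 * r2 + K2 * r2**2 + K3 * r2**3
    return xb * (1.0 - dr_over_r), yb * (1.0 - dr_over_r)

x_c, y_c = refine_photo_coordinate(105.250, -98.730)
print(f"corrected photo coordinates: {x_c:.4f} mm, {y_c:.4f} mm")
```

Every subsequent photogrammetric computation then treats the corrected coordinates as if they came from an ideal pinhole camera, which is why the reliability estimates attached to the calibration carry directly into the reliability of the final product.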
Need for Standards and Specifications
The need for camera calibration has been a fundamental requirement since the inception of photogrammetry. As the profession evolves and the technology advances, the calibration task becomes more complex. Clearly, the quality of camera systems has improved significantly over the years, and one might argue that camera calibration therefore appears less important; yet the impact of even small errors can be significant. However, the development of new sensors and the integration of these sensors with other systems keep the need for calibration as important as before, just different and probably harder to perform.
Traditional camera calibration can also be viewed as a quality assurance process that ensures the metric and image quality of the primary data source for photogrammetric work. The need for standards and specifications seems obvious for all subsequent mapping products as well. Standards and specifications ensure a quality product that meets end-user expectations. Standards also provide traceability. Traceability generally requires organizations to verify that the measurements they make are "traceable" and to have records showing that their own measuring equipment has been calibrated by laboratories or testing facilities whose measurements are part of an "unbroken chain." A calibration certification provides the photogrammetrist with one such form of documentation. Finally, camera and sensor calibration standards provide the scientific community that uses remotely sensed data in its investigations with confidence that research findings are consistent, repeatable, and comparable.
The need for standards and specifications for all mapping products produced and used by the mapping community is intertwined with the general need for quality standards. The current standards activities in the mapping community are widespread and involve many players. As one begins to search for existing standards, one quickly finds a web-like environment of many related activities by many organizations. This is an indication of the complexity of developing standards in today’s world. The standards process and the resulting standards and specifications are essential to the photogrammetric profession. The standards development and approval process is long and laborious, but not by design: the process ensures proper review and acceptance by the entire community affected by the standard. The development of national and international standards for sensor calibration will be a long and tedious process, but an essential one to ensure the integrity of the products and services that the profession provides.
The photogrammetric and remote sensing community needs to define the standard for camera and sensor calibration using the expertise and participants from all sectors of the mapping community. This includes government agencies, equipment manufacturers, practitioners, standards organizations, academia, and professional societies. As the current review and recommendation process evolves into a proposal for a calibration standard, endorsed by the ASPRS, it needs to go through the standards approval process to become a national standard. Working in parallel with the ISPRS, the U.S. standard needs to conform or evolve into an international standard. With ISPRS endorsement, a proposal for an international standard will be developed and go through the ISO standards approval process.
The ISPRS 1996 Congress resolutions relating to calibration are as follows:
Resolution I.2 Image Quality Assessment. The Congress recommends: 1) that research work on methods for the assessment of imaging as well as image quality continues, for photographic as well as for digital or digitized images; 2) that efforts continue to revise or establish recommended procedures for routine calibration and testing of photographic lenses, cameras, films and detectors, based upon existing international standards; 3) that efforts continue to establish internationally accepted specifications for aerial photography, based upon international quality management standards.
Resolution I.3 Standards. The Congress recommends: 1) that ISPRS becomes more actively involved in ISO activities of interest; and 2) that ISPRS determines the value of international quality management and quality control for the photogrammetric and remote sensing community.
Resolution III.1 Integrated Sensor Calibration and Orientation. The Congress recommends that related activities continue during the period 1996-2000, in co-operation with Commission I and the sister IUSM organizations, with emphasis on multi-sensor system calibration, integration of GPS, INS and image processing techniques, and automatic integrated orientation concepts.
These resolutions are relevant to camera and sensor calibration and, thus, the work of the ISPRS Commissions should be closely followed and increased participation by the U.S. would be desirable. According to Dr. Manfred Schroeder, Chairman of ISPRS Commission I, Working Group 1 (Sensor Parameter Standardization and Calibration), the emphasis of the working group has been on "sensor calibration" rather than "standardization." The work thus far has focused on the calibration of CCD-line-scanner imagery rather than digital images of two-dimensional CCD-arrays. Perhaps, further progress can be made with endorsement by the ASPRS and participation by the U.S. community.
Technology Issues
Analog optical sensors
The general impression in the photogrammetric profession might be that everything is going digital, so why not aerial cameras as well? The truth is that the aerial camera manufacturers foresee that analog optical-lens aerial camera systems will be requested, manufactured, and sold for some years to come, perhaps 10 to 15 years. This is mostly because, even with the fast pace of technological advancement, it will be 10 to 15 years before airborne digital sensors are able to compete in terms of image quality and efficiency.
The challenge for digital acquisition systems will be to produce the same amount and quality of imagery as economically as the analog optical-lens aerial camera. Moreover, an aerial camera system purchased today will have, depending on its care and usage, an average life expectancy of at least 20 to 30 years. The two manufacturers that produce most of today’s aerial cameras will probably not introduce any major new analog optical products. The current products have already reached such a high state of maturity that there is really little left to improve upon; they are state of the art, and little more can be extracted from them technically.
However, two of the aerial camera manufacturers are working on developing airborne digital sensors. LH Systems is already flight-testing a prototype of its airborne digital sensor and plans to introduce a production model of more advanced design at the ISPRS congress in Amsterdam, July 2000. Z/I Imaging has also announced their plans for a prototype airborne digital camera to be presented at ISPRS 2000. These digital sensors will initially complement their optical analog products but with an eye to the future of one day being able to replace them. Therefore, it would appear that the need for calibration of analog optical lens aerial cameras is still current and foreseen for possibly the next 30 – 45 years.
Digital optical sensors
For decades, film cameras have been the workhorses around the world for both mapping and reconnaissance photography of the earth. More specifically, the film mapping cameras with 9 x 9 inch format film and a focal length of 6 inches have enjoyed a dominant position in the airborne mapping and remote sensing business. Now digital camera technology is also finding its way into the airborne imaging market place.
In 1972, Landsat orbited an electro-optical camera system that replaced the film with electro-optical components and transmitted its image data via electronic data links to a ground receiving station. Digital computers were then employed to reconstruct the bit stream into an image for viewing on a monitor or printed out on film. The SPOT satellite and many others followed Landsat, so the interest in digital technology for cameras found its place along with film cameras as a means to image the earth. The digital technology trend that began with space systems is now available for airborne imaging.
Electro-optical camera systems are markedly different from film cameras, yet they have similar components. All cameras are composed of four components: 1) optics, 2) detector, 3) processor, and 4) output media. The following table shows the basic differences between film and electro-optical (digital) cameras, and the similarity of the two technologies.
Components | Film Cameras | Digital Cameras |
1. Optics | Lenses and Mirrors | Lenses and Mirrors |
2. Detectors | Film | Solid State Detectors (Charge Coupled Devices, etc.) |
3. Processors | Chemistry | Digital Computers |
4. Output Media | Film | Computer Readable Disks, Tape and Monitors |
There are advantages and disadvantages to both camera types, depending on the application. The basic difference is that film and film processing are replaced by solid-state electronics, such as charge-coupled device (CCD) arrays made up of thousands of tiny detectors called picture elements (pixels). The digital camera calls for advances in computer technology to process the image data faster and store it on a large retrievable storage system rather than on film. Because film is a very dense storage medium, has a long shelf life, and is in use the world over, the transition to all-digital imaging systems is expected to take another 10 to 15 years or perhaps longer. It is difficult to know, because both types of cameras have a place in airborne imaging at this point in time.
It is recognized that film cameras represent a mature technology and digital cameras are in their initial stages. Although the development of digital cameras is moving at the speed of computer technology, it remains true that today’s film mapping cameras cover a much larger format than today’s typical digital camera. A modern film mapping camera made by Z/I Imaging, Inc. or LH Systems will deliver 9 x 9 inch film with approximately 40 line pairs/mm resolution to the user. This is equivalent to 20,800 x 20,800 pixels, with each pixel being 11 micrometers in size, yielding about 432 megapixels per frame. Today’s largest digital cameras have arrays of up to 9000 x 9000 pixels, and these are still in a developmental stage. Typical operational digital cameras, such as those flown by Litton Emerge, have 3000 x 2000 pixels in the array. Litton Emerge uses the Kodak DCS 460, which has a pixel size of 9 micrometers. While the per-pixel resolution of some digital cameras is comparable to that of film cameras, their dynamic range, yielding about 4,096 shades of gray, is superior to the roughly 180 shades of gray typical of film systems.
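A quick back-of-the-envelope check of these figures, using the 11-micrometer equivalent pixel quoted above (the variable names and the one-byte-per-pixel storage figure are our own assumptions):

```python
# Film-equivalent pixel count for a 9 x 9 inch mapping-camera frame.
FORMAT_MM = 9 * 25.4      # frame edge: 228.6 mm
PIXEL_UM = 11.0           # equivalent pixel size; 40 line pairs/mm sampled at
                          # roughly 2 pixels per line pair gives 11-12.5 um

pixels_per_edge = FORMAT_MM * 1000.0 / PIXEL_UM    # ~20,800 pixels
megapixels = pixels_per_edge ** 2 / 1e6            # ~432 megapixels per frame
megabytes_8bit = pixels_per_edge ** 2 / 1e6        # ~432 MB per 8-bit band

print(f"{pixels_per_edge:,.0f} pixels per edge, {megapixels:,.0f} Mpixel, "
      f"about {megabytes_8bit:,.0f} MB per 8-bit band")
```

The roughly 432 MB per band for a single frame illustrates why film remains competitive as a storage medium for the time being.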
Camera types
There are three types of film cameras: panoramic, strip, and frame cameras. Each of the three has a unique place depending on the application. Digital cameras follow the same basic design configurations, except that they are generally known by different names and solid-state detectors replace the film plane. The table below compares the film camera types with their digital counterparts. Note the additional digital type, the three-line concept, which has three linear arrays, usually within one camera frame: one in the middle of the focal plane and one near each edge.
Film Camera | Digital Equivalent |
Panoramic | Scanner |
Strip | Push Broom |
Frame | Staring Array |
--------- | 3-Line Concept |
There are two types of arrays. Linear arrays of 1 x 10,200 pixels are commercially available, and these can be butted together to form a much longer array that provides a wide swath. Rectangular matrix arrays, often called staring arrays, are used in frame cameras. The frame camera with a CCD matrix array in the focal plane is the most likely successor to the film frame camera. It has essentially the same attributes inherent in the film mapping cameras, except that it employs the CCD matrix array instead of film. Frame cameras can be calibrated. They collect one homogeneous unit (frame) at the instant of exposure; the result is one X, Y, Z position and attitude for each frame. The scanner, push broom, and 3-line concept require precise position and attitude for each line, which in turn depend on precise timing. Since linear arrays can be butted together to form arrays of 20,000 or more pixels, it is anticipated that the 3-line concept may become popular in the near term. Then, as CCD technology grows larger and matrix arrays grow toward 20,000 x 20,000 pixels, digital technology will be fully competitive with film mapping cameras.
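The attraction of wide butted arrays follows from simple scale relations: the ground sample distance is the pixel size scaled by the ratio of flying height to focal length, and the swath is that distance times the number of pixels across the array. The flying height, focal length, and pixel size below are arbitrary illustrative values, not parameters of any particular sensor.

```python
# Illustrative scale relations for a butted linear (push broom) array.
H = 3000.0          # flying height above ground (m) -- example value
F = 0.10            # focal length (m) -- example value
PIXEL = 10e-6       # detector pixel size (m) -- example value
N_PIXELS = 20_000   # pixels across the butted linear array

gsd = PIXEL * H / F          # ground sample distance: 0.30 m per pixel
swath = N_PIXELS * gsd       # swath width on the ground: 6.0 km

print(f"GSD = {gsd:.2f} m, swath = {swath / 1000:.1f} km")
```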
Today’s rectangular array sizes in frame cameras are typically (1k x 1k), (2k x 2k), (3k x 2k), (4k x 4k), (4k x 7k), (7k x 9k) and (9k x 9k). The sizes in most common use today are (3k x 2k) and (4k x 4k). The 3k x 2k frame cameras currently in operational use are capable of both natural color and color infrared depending on the filter employed.
The high-technology computer era has truly brought on a transition toward digital camera technology. For some years to come, both film and digital cameras will be deployed by the user community. The U.S. Geological Survey has the mission to calibrate aerial cameras for U.S. Government use. For several decades it has provided this calibration service for film cameras to both the government and the private sector. The USGS should continue in this role, and it should fund the design, construction, and implementation of a calibration instrument for digital cameras. In so doing, the USGS will position itself to calibrate the upcoming digital cameras as they take their place alongside film cameras to handle the mapping and remote sensing needs of the country.
Satellite sensors
Photogrammetry and remote sensing using satellite technology have been evolving since the late 1950s and continue to evolve today. Technical advances continue at a remarkable rate in the United States as well as in other countries. The initial satellite programs have provided a better understanding of the earth, the moon, the planets, and the solar system. During the evolution of satellite technology, sensor types, and thus the ability to sense other spectral bands, have also evolved. The trend today is clearly toward higher spectral and spatial resolution over a wider range of the electromagnetic spectrum; from analog to digital; from military intelligence applications to civil applications, both business and science; from interest in national sites to interest in worldwide sites; from spectral-type applications to more spatial-type applications; and from government-funded programs to private commercial ventures.
One noteworthy trend has been the change in roles of the government and the private sector in satellite programs. While the government is still launching its own satellites (e.g., NASA’s Landsat 7), an increasing number of private companies are launching or plan to launch satellites for commercial purposes (e.g., Space Imaging’s IKONOS, Earthwatch’s QuickBird, and Orbital Sciences’ OrbView). Furthermore, these companies see their business as more than providing satellite images; they also intend to be providers of products and services. Even the types of end-users and government practices are changing. Government agencies are trending away from the model of being both data producer and end-user, and moving toward commercial procurement of data as an end-user. The initial satellite remote sensing projects tended to be demonstration projects in academia and government agencies. Today, projects are more operational and are found throughout the GIS community. The increased use of satellite data, the wide variety of data types and sources, and the technical expertise of a wide range of end-users make calibration standards and methods essential to ensuring the quality of the source data and end products.
The range of sensor technologies and the number of different organizational entities, both governmental and private, involved in building and operating satellite data acquisition systems present a formidable challenge to the calibration task. Government agencies have undoubtedly formulated and executed extensive test plans to ensure that satellite subsystems comply with government specifications. Commercial companies perform similar testing and evaluation, as do the manufacturers of each system component, to ensure that the overall system performs to design specifications. However, the test plans and evaluation results have not been widely available or published in the literature. Hence, common testing procedures have not been established or widely accepted within the satellite community. Another factor affecting the development of common testing procedures is the rapidly advancing technology: with each new satellite, new capabilities are developed and new test procedures are needed. Geometric testing of satellite data generally implies in situ testing over a test range. While test sites for evaluating the accuracy of satellite data have been established, government agencies and private companies do not generally share them. Such test sites are very costly to establish and maintain.
Non-optical sensors
Recent years have seen a great diversification in the means by which topographic mapping data are produced. What at one point was an exclusive domain of aerial photography has now been expanded to a variety of other technologies, both concerning sensors as well as platforms carrying such sensors.
Typically, such technologies are delivered as complex systems that include the platform (aircraft or satellite), motion compensation, and even some elements of information extraction. In contrast to classical camera-based stereo photogrammetry, such tools need to be tested as entire systems rather than by components. Calibration of the individual component devices will be less feasible than calibration of the full system by means of field tests.
Synthetic aperture RADAR and interferometric RADAR
The role of non-optical RADAR imaging technologies in Earth observation is increasing. Many of the traditional mapping functions can now be accomplished using non-traditional techniques, in particular by means of Synthetic Aperture RADAR. ASPRS has recognized this development by dedicating an entire textbook to RADAR Remote Sensing (ASPRS, 1998). Five factors are playing a defining role in this development:
First is geometric resolution. The geometric resolution of airborne and spaceborne RADAR sensors is increasing steadily. Until recently, RADAR images were available only at pixel sizes of 10 meters or so. These resolutions have improved: airborne RADAR images are now available for unrestricted use with resolutions of 0.3 meters per pixel, and we will soon see satellite sensors with pixel sizes of 3 m and later 1 m. This increases the applicability of RADAR images to classical mapping tasks. Discoverer-II is a proposed satellite system of the U.S. Department of Defense with 0.3 m pixels.
Second is interferometry. More RADAR sensors are available, or are coming on-line, that support the interferometric analysis of image pairs to create some of the classical products of photogrammetry. This includes the production of accurate digital elevation models (DEMs), at elevation accuracies of 1 m from aircraft and 3 m from spacecraft imagery, and the support of automated topographic mapping at scales of 1:25,000. This capability has previously been limited to aerial optical sensors.
Third are multiple-image RADAR sensors. As RADAR systems proliferate, they increasingly offer more than one frequency and multiple polarizations; as a result, a "color RADAR sensor" is now available, encoding the individual monochromatic images in various colors for visualization and visual interpretation.
Fourth is an increasing level of investment into sensor systems. The acceptance of RADAR imagery as a primary observation tool is a fact in the defense and intelligence communities. RADAR imagery is gaining acceptance with the civilian mapping agencies as well. Aircraft interferometric sensors proliferate, mostly in the defense research community, but also commercially, and many satellite missions are being prepared for launch during the coming 5 years. These will multiply today’s data streams and by necessity will stimulate the use of the data for routine and novel applications previously only available through photogrammetric technologies.
Fifth, GPS supports the creation of RADAR products at a topographic mapping quality. RADAR sensing is inherently a kinematic process that has not permitted RADAR mapping products to satisfy traditional topographic product standards, and as a result RADAR data were used to produce map substitutes where regular mapping was not possible. The advent of GPS positioning of the sensor, and the reduced sensitivity of RADAR system accuracy to errors in the platform attitude, have led to a revolutionary increase in the overall geometric accuracy of RADAR mapping products. It can be assumed that with such advances in synthetic aperture RADAR sensing, its topographic mapping products will become of increasing interest.
Photogrammetric mensuration with imaging RADAR and calibration technology
Interferometric and stereo-based DEMs have been produced from both airborne and spaceborne RADAR systems for several years. These products did not meet, and were not held to, the same accuracy standards as conventional mapping products. Yet, even as the detail and accuracy of RADAR products have increased, no standards have been developed for RADAR-derived mapping products. A need therefore exists to subject the novel, non-traditional RADAR sensors to the same geometric and radiometric calibration rigor that has long been the standard for optical imaging devices. Standards should be defined and methods to check them should be developed.
Light Detection and Ranging (LIDAR)
An increased interest in urban terrain models with detailed geometries of buildings and trees has existed for some time in the telecommunications industry’s network planning for personal communications systems, and more recently for broadband wireless access systems. Simultaneously, Military Operations in Urban Terrain (MOUT) also require detailed elevation databases in urban environments. Consequently, technologies have been developed to create detailed and accurate DEMs of urban scenes with buildings. LIDAR collects DEMs directly, rather than via an image processing and analysis procedure, by carrying a ranging sensor onboard an aircraft and relying on GPS positions for geometric accuracy. No rules or standards exist to ensure that such products satisfy particular accuracy criteria. Therefore, much as is the case with imaging RADAR, the end user and consumer of such data is left to his or her own devices to establish the accuracy and usefulness of such products.
In summary, what is appropriate for optical cameras must be appropriate for non-optical sensors if the resulting products are to be used for the same purposes and applications.
Acceptance of Technology
Transition time frame
Even with advances in new sensing technologies (more sensors are being placed in orbit and more digital source imagery is being produced), it appears that the level of aerial film use has not been adversely affected.
Current use of new technologies by industry
The current level of use of satellite remote sensing data versus aerial photogrammetric sensors may be approximately 1:4 on a global scale. RADAR versus optical sensing can currently be assessed only within the so-called value-added remote sensing industry, not separately in the photogrammetric stereo mapping industry. This comparison may suffer from a lack of RADAR image data when compared with the optical databases. Yet within the remote sensing segment, RADAR may today already hold a 10% to 15% share of the entire market. This will be somewhat affected by the advent of 1-m optical satellite data, and it could then swing back toward RADAR once RADARSAT-2 and LightSAR, with 3 m and perhaps 1 m geometric resolutions, become available.
Clearly, the use of certain image data in the industry depends on the availability of the data as well as the amount of funding received to create and promote certain sensing technologies and data. It is estimated that over one billion U.S. dollars has been or will be spent within the next five years to create new sensor systems and to place them in orbit. This compares to a current annual market of less than $100 million for optical equipment.
Trends in Support Systems
Conventional airborne surveying
One of the objectives of photogrammetry is to reduce the need for fieldwork (time- and resource-intensive ground survey), thus providing an efficient way to collect 3D data in large volumes. Until recently, analog film-based aerial survey mapping has essentially been the exclusive technology for producing spatial data. The connection between the photo/image coordinates and the ground/mapping frame is provided by the orientation of the images, or georeferencing. The orientation parameters can be divided into interior and exterior orientation. The interior orientation parameters represent the imaging sensor’s internal geometry; these parameters are camera-specific and determined from the camera calibration process. The exterior orientation (EO) parameters represent the camera position and attitude at the moment the image is taken. The EO parameters are commonly described by the spatial coordinates of the perspective center and by three rotation angles known as omega (ω), phi (φ), and kappa (κ). The most commonly used technique to determine the EO parameters is aerotriangulation (AT). Compared to the orientation of a stereo image pair, which requires a minimum of two control points and one elevation point, aerotriangulation allows a substantial reduction in the control point requirements for larger blocks.
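For reference, a standard form of the collinearity condition shows how these parameters connect an image point (x, y) with a ground point (X, Y, Z). Here x0, y0 is the principal point, f the focal length, rij the elements of the rotation matrix formed from ω, φ, κ, and XL, YL, ZL the perspective-center coordinates; the notation is ours, chosen to match common textbook usage.

```latex
x - x_0 \;=\; -f\,
  \frac{r_{11}(X - X_L) + r_{12}(Y - Y_L) + r_{13}(Z - Z_L)}
       {r_{31}(X - X_L) + r_{32}(Y - Y_L) + r_{33}(Z - Z_L)},
\qquad
y - y_0 \;=\; -f\,
  \frac{r_{21}(X - X_L) + r_{22}(Y - Y_L) + r_{23}(Z - Z_L)}
       {r_{31}(X - X_L) + r_{32}(Y - Y_L) + r_{33}(Z - Z_L)}
```

Calibration supplies x0, y0, f (and the distortion corrections applied to x, y); aerotriangulation or direct GPS/INS measurement supplies XL, YL, ZL and the rotation.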
GPS-based airborne surveying
With the introduction of GPS in the early nineties, it became feasible for the first time to directly measure the perspective center component of the EO. Using differential kinematic GPS, the camera position coordinates can be determined at cm-level accuracy. This was a rather revolutionary development, since there was no prior method available to independently measure perspective center coordinates at such high accuracy. Aerotriangulation adjustment packages have been updated to incorporate GPS observations. The use of GPS further reduced the need for ground control; a widely accepted practice is to use only four control points at the corners of the block. Early airborne-GPS experience revealed a difference between the GPS-determined and photogrammetrically derived (AT) perspective center coordinates, especially on higher-altitude flights. Investigations of this subject concluded that the primary reason for this phenomenon lies in the environmental differences between laboratory and actual flight conditions. Cameras are calibrated in a controlled laboratory setting, which is usually quite different from the broadly changing in-flight conditions, including different temperature and pressure gradients in the lens cone. These effects suggest that in-flight or in situ camera (or system) calibration, or refinement of the calibration parameters, is needed. However, it is important to point out that, because of the nature of the AT, the block adjustment will normally compensate for this discrepancy: by minimizing the differences on the ground, it can absorb systematic errors in the system. It can be assumed that the use of GPS has substantially improved the performance of the aerial mapping process, but it has not yet eliminated or reduced the need for AT (including both of its major tasks, point mensuration and adjustment).
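A simple scale argument (ours, not from the panel's deliberations) illustrates why the discrepancy grows with flying height: an in-flight change Δf in the effective focal length acts like a scale error, shifting the photogrammetrically derived perspective-center height by roughly

```latex
\Delta Z \;\approx\; H \cdot \frac{\Delta f}{f}
```

For example, a 10 µm change in a 152 mm focal length at a 6,000 m flying height corresponds to about 0.4 m, which cm-level GPS positioning of the camera readily reveals (the numbers are illustrative).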
GPS/INS-supported airborne surveying
Inertial Navigation Systems (INS) can provide a complete navigation solution by simultaneously providing position and attitude data for the data acquisition platform. The sensor component, the Inertial Measurement Unit (IMU), can deliver high-accuracy acceleration and angular rate data at a very high sampling rate. The IMU data provide the position and attitude solution, and GPS is used as an on-the-fly (OTF) calibration tool. Performance evaluations have confirmed that 4-7 cm positional and 10-20 arcsec attitude accuracy are achievable under operational circumstances.
The use of integrated GPS/INS systems is mandatory for non-conventional (non-optical) imaging sensors. For example, LIDAR and RADAR systems need accurate platform motion data; otherwise, the high-accuracy potential of the range data cannot be realized. These sensors work at rather high data rates (10 kHz or higher), and typically the platform positioning data represent the biggest term in the overall error budget. For electro-optical sensors, the use of GPS/INS-based orientation is not necessary, but it is advantageous since it can largely eliminate the need for the most complex task of photogrammetry, aerotriangulation.
Digital cameras, with their limited footprint, present a problem in applications. The use of a 3" by 3" digital sensor, instead of 9" by 9" film, would result in nine times as many images. In a traditional AT-based project, the resulting near order-of-magnitude increase in the number of point measurements is unacceptable. Having GPS/INS-based orientation data, however, can reduce expenses by making the number of images much less of a concern. In this sense, GPS/INS systems will play a key role in the development of airborne digital cameras and their upcoming introduction to the airborne surveying market. For line sensors, which offer larger ground coverage, the need for GPS/INS is even more important, since otherwise the reconstruction of the imaging geometry from an airborne platform is almost impossible.
The use of directly measured EO parameters (derived from GPS/INS) requires knowledge of the transformation between the GPS/INS and camera frames, known as the boresight transformation. The two components of boresighting are the offset vector between the INS center and the camera perspective center, and the rotation matrix from the INS body frame to the camera frame. The boresighting computation is relatively straightforward if two position/orientation solutions, one from GPS/INS and one from photogrammetry, are available. The critical component is the rotation, since an angular inaccuracy, unlike an offset, is amplified by the flying height and has a significant impact on photogrammetric data production. The calibration parameters must be determined with the highest achievable accuracy and must remain constant for subsequent missions. In other words, no flex or rotation can occur between the INS and camera devices; the whole mount should be sufficiently rigid.
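A minimal sketch of the boresight computation, assuming both EO solutions refer to the same exposure and ignoring time synchronization; the rotation convention, helper function, and all numbers below are illustrative assumptions, not taken from this report.

```python
import numpy as np

def rot_from_opk(omega, phi, kappa):
    """Rotation matrix (ground -> sensor frame) for angles in radians,
    using one common omega-phi-kappa convention; other conventions exist."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi),   np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# EO of the same exposure from the two sources (illustrative numbers only).
R_ins = rot_from_opk(0.0102, -0.0051, 1.5710)   # attitude from GPS/INS
R_cam = rot_from_opk(0.0100, -0.0050, 1.5708)   # attitude from aerotriangulation
X_ins = np.array([5000.00, 3000.00, 2001.20])   # INS center, mapping frame (m)
X_cam = np.array([5000.10, 2999.95, 2000.00])   # perspective center from AT (m)

# Boresight rotation (INS body frame -> camera frame) and lever arm.
R_boresight = R_cam @ R_ins.T
lever_arm_map = X_cam - X_ins            # offset expressed in the mapping frame
lever_arm_body = R_ins @ lever_arm_map   # same offset in the INS body frame

# The angular part is the critical one: an angle error left uncorrected is
# amplified by the flying height H when projected to the ground.
H = 2000.0                                                     # flying height (m)
angle = np.arccos(np.clip((np.trace(R_boresight) - 1) / 2, -1.0, 1.0))
print(f"boresight angle ~{np.degrees(angle) * 3600:.0f} arcsec; "
      f"uncorrected, that is ~{H * angle:.2f} m on the ground at H = {H:.0f} m")
```

In practice the boresight is estimated by averaging over many exposures of a calibration flight, which also exposes any lack of rigidity in the mount.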
The impact of GPS/INS-based orientation on camera calibration
The use of direct (platform-based) orientation in aerial surveying, whether in combination with a traditional large-format aerial camera or an emerging digital camera, has a significant impact on the accuracy of the acquired spatial data. By eliminating the AT adjustment, there is no longer any built-in mechanism to automatically compensate for systematic errors in the system. Any error in either the interior or the exterior orientation will translate into positional errors in the features extracted from the images. In the past, the difference between the AT-derived exterior orientation parameters and the actual physical parameters of the camera was never a concern (it simply was not considered in the adjustment). Therefore, the use of direct orientation puts more weight on the individual calibration of the sensors, as well as on the overall system calibration and continual quality control. Based on current expertise, the following calibration approaches can be suggested:
Calibration can be thought of as procedures intended to assess characteristics of components of a measurement system (component method) or as procedures intended to assess the complete operational system (system method). Component methods have been particularly useful for manufacturers of metric cameras for product quality control purposes. Users, on the other hand, although traditionally satisfied with results of component calibrations, are becoming interested in a system method to assure that full advantage is taken of contributions from additional metric sensors such as GPS and INS.
Component methods:
In the Western Hemisphere, standard mapping cameras are calibrated by exposing, onto film through the camera body, images from a bank of multi-collimators. In Europe, an optical approach termed the goniometer method is used. In either case, the procedure is conducted within a laboratory under closely controlled environmental conditions. Worldwide, other methods of camera calibration are used at various facilities and under various circumstances of application. Examples are the methods of plumb lines, convergent orientation, and the three-dimensional control field. With these methods, some form of functional constraint is usually employed. Depending on the application, these methods may be considered either component or system approaches to calibration.
Multi-collimator method:
With this method, a series of optical collimators, focused at infinity and equipped with reticles containing a center cross and image resolution targets, are rigidly affixed to a supporting frame and oriented toward a common camera station. The center collimator establishes the directional origin, to which banks of additional collimators are related at angular increments of about 7.5 degrees along four radial banks. Depending on the facility, there may be variations in the placement of the collimators.
The accurate angular relationship among the collimator center crosses is measured by a theodolite. When a camera is to be calibrated, a scope equipped with a center-cross reticle is pointed at the center cross of the central collimator. When aligning the camera, this line of sight is established by auto-collimation off the back surface of an optical flat resting on the registration frame of the camera.
At this point, a special flat photographic plate replaces the optical flat used for the camera orientation, and a photographic exposure is made. After processing, the image coordinates of the collimator center crosses are measured on a comparator and compared to coordinates computed for the corresponding images from the known external relationships among the collimators and the assumption of collinearity. Discrepancies between the measured and the computed coordinate values represent the lens distortion of the camera. Note that the image of the central collimator is located at the principal point of auto-collimation, making it possible to observe directly the coordinates of the photo-coordinate origin.
There are many variations in approach, but they all rely on the measurement of images of collimator center crosses and the comparison to their predicted positions based on known exterior angles and on the condition of collinearity.
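A simplified sketch of the reduction step: an ideal, distortion-free camera would image a collimator inclined at angle θ to the axis at radial distance f·tan θ from the principal point, so the difference between the measured and ideal radial distances is the radial distortion at that field angle. The angles, focal length, and measured values below are invented for illustration, and a real calibration also recovers the principal point, calibrated focal length, and a distortion model by adjustment rather than simple differencing.

```python
import math

# Known collimator angles along one radial bank (degrees from the camera axis)
# and the measured radial distances of their cross images (mm). All values
# here are illustrative, not from an actual Report of Calibration.
collimator_angles_deg = [7.5, 15.0, 22.5, 30.0, 37.5]
measured_radial_mm    = [20.013, 40.731, 62.962, 87.760, 116.637]

f_mm = 152.000   # nominal focal length used for the comparison

for theta_deg, r_meas in zip(collimator_angles_deg, measured_radial_mm):
    r_ideal = f_mm * math.tan(math.radians(theta_deg))   # distortion-free position
    distortion_um = (r_meas - r_ideal) * 1000.0          # radial distortion
    print(f"theta = {theta_deg:4.1f} deg  r_ideal = {r_ideal:8.3f} mm  "
          f"distortion = {distortion_um:+5.1f} um")
```

The goniometer method described next rests on the same f·tan θ relation with the roles reversed: the radial distances on the precision grid are known, and the angles observed through the lens are what is measured.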
Goniometer method:
The goniometer method does not include a photographic step. Instead of a photographic plate, a precision grid is placed on the camera registration frame.
Before placing the camera in the calibration fixture, a line of sight is established between the collimation axes of a telescope, mounted on an accurately subdivided horizontal circle, and an auxiliary scope affixed to the calibration fixture. The camera is then placed in the fixture and oriented by autocollimation off the backside of the grid by a fixed telescope. This establishes the external normal to the camera registration frame as being the reference direction on the horizontal circle.
The camera is rotated about the optical axis to bring one diagonal of the grid to horizontal. The grid intersections along the diagonal are then sighted through the lens and measured with the main telescope mounted on the precise horizontal circle. When this set of observations is completed, the camera is rotated ninety degrees about its optical axis and the observations repeated on the new line of grid intersections.
The grid coordinates are compared to the computed coordinates for each observed intersection based on the observed angles and the assumption of collinearity. Note that since the reference direction was established as normal to the camera registration frame, the reference direction points to the principal point of autocollimation. Finally, the discrepancies between the known (grid) coordinates and the corresponding computed values are used as the measure of any distortion based on an assigned value of calibrated focal length. Variations in the goniometer method exist depending on the calibration facility.
Other methods:
Other methods of calibration of metric cameras have evolved as requirements and resources dictate. Some interesting methods have been described by Duane Brown [Brown, 1974] such as the stellar method, the string-line method, and the multi-camera convergent method. Depending on application, these may be considered either as component or system approaches to calibration. Yet another method uses a three-dimensional control field.
With the stellar method, the camera, equipped with a photographic plate, is pointed toward the star field. Multiple exposures of star images are made through a capping shutter, which is operated repeatedly and whose timing is recorded to provide a time code for the star-trace images.
Knowing the approximate position of the exposure station, and after corrections for refraction, the apparent right ascension and declination of the stars are used as known values. The measured images for each of the several epochs of exposure provide, when compared to corresponding star positions for that epoch, discrepancies representing distortion. Results are determined by the mathematical adjustment of all observations of images. In addition to parameters of interior orientation, the assumed location of the camera station is also corrected by adjustment. It is not unusual to have 200 star image observations on each plate. If the procedure is used for calibration of a camera intended for stellar measurements or satellite tracking, the method is extremely accurate. If used to calibrate a camera for airborne applications, the method is considered a component calibration.
The string-line method requires a series of suspended strings or wires arranged so that they provide straight, parallel objects. The departure from straightness of their images provides the measure of lens distortion. By taking multiple images after rotating the camera about its optical axis, the imagery can be widely distributed across the field of the camera. By adjustment computation, the elements of interior orientation can be determined. This method is generally considered a component calibration for subsequent close range applications.
The multi-camera convergent method uses a series of photographs taken from at least two stations and at least one exposure made at the same camera station after a ninety-degree rotation about its optical axis. Targets arranged to fill the field of view of the camera are established and provide at least two, non-collinear object space rays through each target. It is interesting to note that no object space control is required for this method. In the subsequent adjustment, all elements of exterior orientation, all object space coordinates of targets, and all elements of interior orientation are treated as parameters in a common adjustment. This method is considered to be a close range method and may be a component or system approach depending on application.
Perhaps the most popular of the other methods is the photographing of a ground based three-dimensional control field. Targets of precisely determined known location are established in a manner that encompasses the object space of interest. This provides a sharp decoupling of the parameters of interior and exterior orientation. The method may be used for close range or terrestrial applications.
Systems approach
The notion of a systems [in situ] approach to calibration is best expressed in a paper by Churchill Eisenhart [1963] in which he presents a rational approach to calibration of measurement systems. The photogrammetric process represents a typical measurement system. Consequently, the procedure for development of a calibration method can closely follow Eisenhart’s guidelines. His method of system calibration can be described as consisting of two phases.
The first phase is the detailed description of the measurement system including hardware, software, measurement procedures, environment, and other factors describing the measurement system. This phase includes numerical estimates of all components where appropriate, and is termed the establishment of system specifications.
Given a measurement system specification (including ranges), it is necessary to assess system performance by comparison of system results to an independent, higher accuracy standard. Given sufficient data, a rational prediction of system measurement ability can be made. The testing of performance is an ongoing procedure during the life of the measurement system. This second phase is termed the establishment of a state of statistical control. The Eisenhart notion gives balance and credibility to the design of a calibration program for the photogrammetric system.
Close range / terrestrial applications
Following the notion of measurement system calibration, many non-aerial applications can efficiently use an in situ method of calibration. As an example in close range applications, if the object to be measured photogrammetrically is bracketed three-dimensionally by targeted control, then a self-calibration approach can be used. In this instance, not only are the derived parameters of interior and exterior orientation carried in a common adjustment, but so are the weight-constrained coordinates of the control (or their continuum). These circumstances lend themselves well to the notion of system-level calibration.
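In least-squares terms, such a self-calibrating adjustment can be written (in one common formulation, not taken from the report) as a minimization over the interior orientation i, the exterior orientations e_j of the photos, and the object coordinates X_k of the targets,

```latex
\min_{i,\;\{e_j\},\;\{X_k\}} \;
  \sum_{j,k} \bigl\| \mathbf{x}_{jk} - \pi(i, e_j, X_k) \bigr\|^{2}_{\Sigma_x^{-1}}
  \;+\; \sum_{k \in \mathrm{control}} \bigl\| X_k - X_k^{\mathrm{obs}} \bigr\|^{2}_{\Sigma_X^{-1}}
```

where x_jk are the measured image coordinates, π(·) denotes the collinearity projection given earlier, and the second term is the weight constraint that keeps the targeted control coordinates close to their surveyed values.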
There are other application variations that may satisfy the in situ aspect of calibration.
Aerial applications
With the advent of GPS, observing the exposure station coordinates for the aerial case has become practical. Earlier, the application of stellar camera positioning of the exposure station [Brown, 1969] for in situ calibration of a film-based aerial camera demonstrated an accuracy potential well above that of conventional aerial methods.
Internal system quality check (in situ)
With the advent of more sophisticated airborne digital sensors, new possibilities arise for monitoring their status. One scenario would be for the geometric and/or radiometric calibration parameters, once determined, to be loaded into the airborne digital sensor system. Before flying a mission, an internal check would be run by exposing the sensor to an image of known properties. The system then compares the observed image with its stored calibration data. If this internal check is within tolerance, then it is safe to proceed; if not, the system would need either service or re-calibration.
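A minimal sketch of such a self-check, assuming the system stores reference image coordinates of a few built-in target marks; the target names, coordinates, and tolerance are hypothetical.

```python
import math

TOLERANCE_UM = 3.0   # hypothetical acceptance threshold (micrometers)

# Target image coordinates (mm): stored at calibration vs. measured in the check.
stored   = {"T1": (0.000, 0.000), "T2": (45.002, 0.001), "T3": (0.002, 44.998)}
measured = {"T1": (0.001, -0.001), "T2": (45.004, 0.000), "T3": (0.001, 45.000)}

deviations_um = []
for name, (sx, sy) in stored.items():
    mx, my = measured[name]
    deviations_um.append(1000.0 * math.hypot(mx - sx, my - sy))

worst_um = max(deviations_um)
if worst_um <= TOLERANCE_UM:
    print(f"within tolerance ({worst_um:.1f} um): safe to proceed")
else:
    print(f"out of tolerance ({worst_um:.1f} um): service or re-calibration needed")
```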
Current calibration practices – USA
Background
The U.S. Geological Survey, in the Department of the Interior (DOI), operates the only active calibration facility for aerial mapping cameras in the United States. This activity resides in the Optical Science Laboratory (OSL) of the National Mapping Division’s Mapping Applications Center in Reston, Virginia. The calibration work performed in this laboratory is a critical part of the infrastructure of the photogrammetric industry and is considered "inherently governmental."
Historically, the United States has approached the determination of the lens and camera constants of aerial mapping camera systems using an operational photographic method based on multicollimators, rather than the goniometer (visual) method.
Product
The "Report of Calibration" issued by the OSL is a requirement in the contracting process of determining compliance with specifications for mapping data collected among all levels of government and in many levels of the private sector. The "Report of Calibration" serves a second and a greater role by providing quantitative camera system characteristics that are used to produce image and map products with a high degree of geometric accuracy and spatial resolution. The "Report of Calibration" presents the camera system parameters in terms that can be used in direct support of modern precision photogrammetric compilation applications. This differs from other calibration reports that provide results using a lens design point of view.
This process provides for the following:
The Future
Rapid changes in imaging technology have occurred, and are still occurring, in the last half of this decade. Digital aerial cameras are starting to be used to gather imagery, but no standards or procedures for their calibration have been established. The USGS is calibrating a few digital cameras on its multicollimator instrument to begin research into the design and requirements of a future digital calibration instrument.
Current Calibration Practices – International
Very few countries in the world have any type of aerial camera calibration capability. Canada has a calibration capability, located at the National Research Council (NRC) in Ottawa. The NRC was one of the other facilities, along with the USGS, that took delivery in the 1950s of a multicollimator system made in the US. We understand that the NRC has stopped using this system and has replaced it with a different multicollimator that measures only one diagonal at a time. The NRC also has the capability of measuring the Optical Transfer Function (OTF) of a lens. The OTF is a consistent method of image quality evaluation for an optical system.
GSI in Japan has a calibration capability using a Wild AKG Autocollimating Goniometer, although this instrument is really only suitable for cameras with older generations of lenses. The AKG is capable of measuring only the distortion across the lens field, not the fiducial coordinates; the fiducial coordinate values are taken from the original calibration reports. We understand that Australia, China, South Africa, Sweden, and the UK also obtained the Wild AKG, but we do not know whether these instruments are still in use in those countries. Camera calibration equipment also exists at the facilities of the two aerial camera manufacturers, LH Systems and Z/I Imaging. We understand that the aerial cameras sold throughout the rest of the world are either never calibrated again or are returned to their respective manufacturers for service and re-calibration.
Calibration Infrastructure Options
There are several options for meeting the profession’s calibration needs. The infrastructure for calibration of sensors in the future is largely dependent upon effectiveness, funding resources, and acceptance. Each option has advantages and disadvantages. Several options have been discussed by this panel and are briefly presented below; there may be others.
Federal agency responsibility
The federal government certainly has a need among civil agencies for camera and sensor calibration. Federal agencies also indirectly perform quality assurance of their data through the control and specification of the camera calibration process.
USGS-U.S. Geological Survey, the current model.
In 1973, the calibration services for aerial cameras and lenses provided by the National Bureau of Standards, U.S. Department of Commerce, were transferred to the USGS. The mission of the USGS Optical Science Laboratory (OSL), a national program, is to serve as the official calibration facility for aerial mapping cameras for the United States and its North American Free Trade Agreement partners.
The OSL charges users a fee for camera calibration and the report of calibration. Since the OSL also provides a quality assurance function for the basic mission of the USGS, not all operating costs of the OSL are recovered in the calibration fees; maintenance costs and replacement capitalization fees are not charged.
NASA-National Aeronautics and Space Administration.
NASA’s procedure for calibrating its mapping cameras is to send the camera to the manufacturer for inspection and maintenance every three years; the camera is then sent to the USGS for calibration. NASA also performs spectral and radiometric calibration of its multispectral scanners at several locations (Ames, JPL, Goddard, Stennis, and the University of Arizona). Spectral and radiometric calibrations are conducted in the calibration laboratory as well as in situ, using ground and airborne calibration equipment.
NASA’s Commercial Remote Sensing Program at the Stennis Space Center is conducting a verification and validation project. The project is a collaborative effort with industry to verify the performance of airborne and spaceborne remote sensing systems. A 150m x 150m x 130m "target pad" is one of several tools being developed for sensor validation. Other NASA calibration facilities also perform spectral and radiometric calibrations for other agencies and commercial companies on a case-by-case basis. For example, the calibration facility at NASA Ames has provided multispectral scanner calibration for DOE’s Nevada Test Site as well as Space Imaging EOSAT, Inc.
DOD-Department of Defense.
Over the years, various DOD agencies have created resolution test ranges for evaluating their camera systems. Many of these sites have been abandoned because of high maintenance costs and lack of use. However, a need for a photogrammetric test range still exists, both to evaluate new technologies and to support in situ testing of airborne and spaceborne systems. In researching calibration activities within DOD, the panel found mention of a resolution test target at Travis Air Force Base, used as part of the Open Skies Project, and of a high-resolution elevation test range being established by the National Imagery and Mapping Agency (NIMA) at DOE’s Nevada Test Site. These leads should be followed for potential use in photogrammetric testing and calibration. Contact with NIMA and other defense organizations would be useful to take advantage of work they have already done and to coordinate any new calibration efforts.
NIST-National Institute of Standards and Technology.
The NIST, a non-regulatory agency of the U.S. Commerce Department’s Technology Administration, was established to support industry, commerce, scientific institutions, and all branches of government. The NIST Labs further the technical aims and capabilities of U.S. industry and serve as an impartial source of expertise, developing highly leveraged measurement capabilities and other infrastructural technologies.
Under the direction of Technology Services, NIST provides a variety of services to help U.S. industry, government agencies, academia, and the public improve the quality, reduce the cost, and strengthen the competitiveness of products. Services include calibration, laboratory accreditation, and coordination of metric usage. Calibration laboratories and testing facilities may be accredited by NIST under the National Voluntary Laboratory Accreditation Program.
GIDEP-Government-Industry Data Exchange Program
GIDEP is a cooperative activity between government and industry participants seeking to reduce or eliminate expenditures of resources by making maximum use of existing information. The program provides a medium for exchanging technical information essential during the research, design, development, production, and operational phases of the life cycle of systems, facilities, and equipment. Participants in GIDEP have electronic access to six major types of data: 1) engineering data, 2) failure experience data, 3) metrology data, 4) production information data, 5) reliability and maintainability data, and 6) the urgent data request system.
Independent testing organizations
Nearly every consumer product can be tested by an independent organization for safety, conformance to specifications, functionality, failure analysis, or almost anything else. Independent testing is generally desired by both manufacturers and consumers because of the appearance of objectivity it provides. It is also good business practice and good advertising.
UL-Underwriters Laboratories Inc.
UL is an independent, not-for-profit product safety testing and certification organization established in 1894. Building on its undisputed reputation and household name recognition, UL and its family of companies are among the most recognized and reputable conformity assessment providers in the world. Companies submit products for testing and certification because consumers often look for the UL Mark on the products they buy. Consumer confidence in the quality of a product derives from the reputation of UL and from trust that UL has thoroughly tested the product before certifying it.
There are numerous independent companies that test specific products, personnel, laboratories, and system failures. However, there are no firms whose sole purpose is to test geospatial products, evaluate private photogrammetric companies, appraise operating procedures, analyze new techniques, or ensure the quality of map products. There is probably no market for such a company, because this limited work would presumably be performed by other practitioners or by individual companies as part of doing business.
Equipment manufacturers
There are clear advantages in having equipment manufacturers provide the necessary camera or sensor calibration for the photogrammetric community. Manufacturers have a complete understanding of how their equipment was designed and of its performance characteristics. They also have highly trained, knowledgeable personnel and customized test equipment. Manufacturers already maintain calibration capabilities in order to test and perform quality assurance on their respective products (Pacey, Scheidt, and Walker, 1999). This should help keep the cost of calibration to a minimum.
Camera manufacturers clearly have procedures and custom equipment to test the quality of their cameras and to perform camera calibration. While there is concern about the objectivity of equipment manufacturers performing camera calibrations for the photogrammetric community, there are advantages in having state-of-the-art, customized testing equipment as well as trained personnel with first-hand knowledge of the latest equipment. Concerns about objectivity could be addressed by an oversight committee composed of experts from government agencies, various standards groups, or the ASPRS. The procedures and fee schedule could also be set by the oversight committee.
The testing and calibration of other sensors probably varies more among manufacturers than it does for analog frame cameras. However, larger sensor manufacturers probably have similar in-house testing facilities customized to the quality control of their products. Calibration of the many types of sensors in existence today, and of new technologies in the future, may be performed more effectively by the manufacturers themselves under an ISO quality assurance model. It should be noted that smaller companies manufacturing sensor equipment may not have any calibration capability and either depend on others to calibrate their products or do no calibration at all.
Professional societies
Many professional societies serve to ensure the integrity of their profession through professional examinations, licensing, registration, certification programs, review boards, training, and standards development. The ASPRS can play a significant role in defining standards for calibration, overseeing the infrastructure, and representing the U.S. position on sensor calibration internationally through ISPRS.
ASPRS-American Society for Photogrammetry and Remote Sensing.
ASPRS is the premier professional organization for the photogrammetry and remote sensing profession. ASPRS, like many other professional organizations, has instituted a certification program for practicing professionals. ASPRS could extend its purview to include the certification of other elements such as sensor calibration, instrumentation, standards development, and operating procedures.
ISPRS-International Society for Photogrammetry and Remote Sensing.
While international professional certification may not be practical or desirable, worldwide acceptance of calibration standards and specifications can be enhanced and extended under the auspices of ISPRS.
ASQC-American Society for Quality Control.
ASQC, founded in 1946, is a society of individual and organizational members dedicated to the ongoing development, advancement, and promotion of quality concepts, principles, and techniques. Most of the quality methods now in use, such as statistical process control, quality cost measurement and control, total quality management, and zero defects, were initiated by ASQC members.
Standards organizations
Standards organizations certainly have a role in the definition and acceptance of standards for camera and sensor calibration for the mapping community. In some standards organizations it may not be appropriate to include camera and sensor calibration, while others may not be interested because of the relatively small number of practitioners in the profession. At this point, the following standards organizations have been identified but not thoroughly studied to determine the appropriate subcommittee, contacts, or process for placement of any standards or calibration activity.
ANSI-American National Standards Institute.
ANSI is a private non-profit organization that administers and coordinates the U.S. voluntary standardization system. Its mission is to enhance U.S. global competitiveness by promoting, facilitating, and safeguarding the integrity of the voluntary standardization system. ANSI is the official U.S. representative to the ISO (see below).
ASTM-American Society for Testing and Materials.
ASTM, organized in 1898, is a non-profit organization that provides a forum for producers, users, consumers, government, and academia to meet and develop standards for materials, products, systems, and services. Through the work of its standards-writing committees, ASTM publishes standard test methods, specifications, practices, guides, classifications, and terminology. ASTM headquarters has no technical research or testing facilities; such work is done voluntarily by 35,000 technically qualified ASTM members located throughout the world.
ISO-International Organization for Standardization.
ISO is a non-governmental organization established in 1947 and is a worldwide federation of national standards bodies from some 90 countries. Its mission is to promote the development of standardization in order to facilitate the international exchange of goods and services. ISO’s work results in international agreements that are published as International Standards. An International Standard is a result of an agreement among the member bodies of ISO. It may be used as such, or may be implemented through incorporation in national standards of different countries. ISO can also provide the quality assurance of photogrammetric products and services through the ISO 9000 certification process. The ISO 9000 certification process can also be applied to equipment manufacturers.
NIST-National Institute of Standards and Technology.
The NIST Measurement and Standards Laboratories maintain more than 1,300 different Standard Reference Materials that companies, government agencies, and others can use to check the accuracy of the most exacting measurements. Since photogrammetry relies on making precise measurements, it seems logical that NIST could play a role in the calibration process.
Summary & Recommendations
As stated in the Overview, this report is submitted in response to the Panel charge and associated questions. The report discusses the purpose of geometric calibration and the need for standards and specifications. Technology issues, including imaging technology developments, acceptance of those developments by the community and trends in support systems, are discussed. General calibration methods, current practices in the USA and the international community, and a discussion of calibration infrastructure options are also included.
The seven recommendations below are submitted in order to emphasize the following requirements: an independent calibration capability for analog cameras, a move toward a digital camera calibration capability in the near future, the establishment of in situ calibration procedures to support system-level calibration and satellite-borne systems, and the development of national and international calibration standards.
1. The USGS Optical Science Laboratory (OSL) should continue to calibrate film mapping cameras using the present calibrator and the Simultaneous Multiframe Analytical Calibration (SMAC) program. In addition, the USGS should:
Why the USGS?
Current mapping methods have evolved over many decades. With few exceptions, mapping activities in the United States depend on USGS-provided calibration results. Calibrations of aerial cameras are performed with every care given to reporting the true photogrammetric capabilities of a camera, and customers rely on this report of calibration to initiate their photogrammetric mapping and GIS production processes. In addition, to ensure that the quality of geospatial data derived from aerial imagery is maintained, most governmental contracting organizations require a 3-year re-calibration cycle for airborne collection systems. The calibrated parameters form the spatial link between metric quality photographs and topography, as sketched below. To disrupt this provision of essential information in favor of a non-USGS alternative would cause most mapping activities to pause while they adapt to new relationships between the photo and the ground.
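To make this link concrete, the collinearity relationship below (a generic textbook formulation, not the specific notation of the USGS report of calibration) shows where the calibrated focal length c, the principal point (x_p, y_p), and the radial distortion coefficients k_1, k_2 reported by a calibration enter every subsequent mapping computation:

\[
x - x_p + \Delta x = -c\,\frac{m_{11}(X - X_0) + m_{12}(Y - Y_0) + m_{13}(Z - Z_0)}{m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)}, \qquad
y - y_p + \Delta y = -c\,\frac{m_{21}(X - X_0) + m_{22}(Y - Y_0) + m_{23}(Z - Z_0)}{m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)},
\]
\[
\Delta x = (x - x_p)\,(k_1 r^2 + k_2 r^4), \qquad
\Delta y = (y - y_p)\,(k_1 r^2 + k_2 r^4), \qquad
r^2 = (x - x_p)^2 + (y - y_p)^2 .
\]

Here (x, y) are measured image coordinates, (X, Y, Z) is the corresponding ground point, and (X_0, Y_0, Z_0) with the rotation elements m_ij describe the exposure station. Errors in the calibrated values of c, (x_p, y_p), or k_i therefore propagate directly into every ground coordinate derived from the imagery.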
As the primary civilian mapping agency in the United States, the USGS is the only agency in the country operating an accurate, independent aerial camera calibration facility. As such, the USGS maintains a "disinterest" in the results of the calibrations, having no vested interest in their outcomes.
Community input to the Panel, both through the website and directly to the individual members, was vigorous in the call for the continuation of a "neutral" calibration service. Support of the services provided by the OSL was equally strong. The following quote is taken from a letter from the Management Association for Private Photogrammetric Surveyors (MAPPS) to the USGS on the subject of camera calibration:
2. Initiate the design, development, and implementation of a digital camera calibration capability at the USGS (estimated required investment - $4 M).
3. Conduct the following research efforts in order to support a reliable and cost-effective transition to digital acquisition systems (estimated required investment - $1 M):
4. Initiate the design, development, and implementation of an in situ (flight) calibration process; an illustrative sketch of such a process follows these recommendations.
5. A calibration/verification process must be established for satellite imagery.
7. Adequate funding should be sought in order to ensure the continued operation of the Optical Science Laboratory (OSL), as well as to provide for the improvements and extensions described in the preceding recommendations.
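As a minimal illustration of the in situ calibration concept in Recommendation 4 (a hypothetical sketch only, not a procedure endorsed by the panel or used by the OSL), the following Python fragment assumes imagery has been flown over a surveyed target range with exterior orientation supplied by airborne GPS/INS, and recovers the interior orientation elements by least squares from image measurements of the targets. All names and numerical values are illustrative.

    # Hypothetical in situ self-calibration sketch: recover focal length,
    # principal point, and one radial distortion term from image measurements
    # of surveyed ground targets, with exterior orientation assumed known.
    import numpy as np
    from scipy.optimize import least_squares

    def project(points, camera_pos, R, f, x0, y0, k1):
        """Pinhole projection with a single radial distortion term."""
        p_cam = (R @ (points - camera_pos).T).T          # object -> camera frame
        x = p_cam[:, 0] / p_cam[:, 2]                    # normalized image coords
        y = p_cam[:, 1] / p_cam[:, 2]
        r2 = x**2 + y**2
        x_d = x * (1.0 + k1 * r2)                        # apply radial distortion
        y_d = y * (1.0 + k1 * r2)
        return np.column_stack([f * x_d + x0, f * y_d + y0])

    # Synthetic ground targets (metres) and a nadir-looking exposure 1500 m above them.
    rng = np.random.default_rng(0)
    targets = np.column_stack([rng.uniform(-500, 500, 40),
                               rng.uniform(-500, 500, 40),
                               rng.uniform(0, 20, 40)])
    cam_pos = np.array([0.0, 0.0, 1500.0])
    R = np.diag([1.0, -1.0, -1.0])                        # camera axis pointing straight down

    true_params = np.array([153.0, 0.01, -0.02, 4e-3])    # f (mm), x0 (mm), y0 (mm), k1
    obs = project(targets, cam_pos, R, *true_params)
    obs += rng.normal(scale=0.003, size=obs.shape)        # ~3 micrometre measurement noise

    def residuals(params):
        return (project(targets, cam_pos, R, *params) - obs).ravel()

    fit = least_squares(residuals, x0=[150.0, 0.0, 0.0, 0.0])
    print("estimated f, x0, y0, k1:", fit.x)

In an operational setting the exterior orientation would itself carry uncertainty and would normally be estimated simultaneously with the interior orientation in a self-calibrating bundle adjustment; the fragment above omits that complexity for brevity.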
REFERENCES
Brown, D. C. (1969). "Advanced Methods for the Calibration of Metric Cameras," DBA Report presented at the Symposium on Computational Photogrammetry, SUNY at Syracuse University, January 1969.
Brown, D. C. (1974). "Evolution, Application and Potential of the Bundle Method of Photogrammetric Triangulation," Symposium in Stuttgart, ISP, Commission 3, p. 69, September 2-6, 1974.
Eisenhart, Churchill (1963). "Realistic Evaluation of the Precision and Accuracy of Instrument Calibration Systems," Journal of Research of the National Bureau of Standards, Vol. 67C, No. 2, April/June 1963.
Merchant, D. (1995). "Testing Calibrations In Application To Airborne GPS Controlled Photogrammetry," Final Report, Topo Photo Inc., for USGS National Mapping Division, Reston, VA, March 28, 1995.
Pacey, Roger; Scheidt, Michael; Walker, A. Stewart (1999). "Calibration of analog and digital airborne sensors at LH Systems," paper presented at the 1999 ASPRS Annual Conference, Portland, Oregon, May 17-21, 1999.