Photogrammetry is the technique of measuring objects (2D or 3D) from photograms. We commonly say photographs, but they may also be imagery stored electronically on tape or disk, taken by video or CCD cameras, or by radiation sensors such as scanners.
The results can be:
- coordinates of the required object-points
- topographical and thematic maps
- rectified photographs (orthophotos).
Its most important feature is that objects are measured without being touched. Therefore, some authors use the term "remote sensing" instead of "photogrammetry". Remote sensing is a rather young term, originally confined to working with aerial photographs and satellite images. Today it also includes photogrammetry, although it is still associated more with image interpretation.
Principally, photogrammetry can be divided as follows:
- Depending on the lens setting:
- far range photogrammetry (camera distance set to infinity), and
- close range photogrammetry (camera distance set to finite values).
- Another grouping is:
- aerial photogrammetry (mostly far range photogrammetry), and
- terrestrial photogrammetry (mostly close range photogrammetry).
The applications of photogrammetry are widespread. Principally, it is utilized for object interpretation (What is it? Type, quality, quantity?) and object measurement (Where is it? Form, size?).
Aerial photogrammetry is mainly used to produce topographical or thematic maps and digital terrain models. Among the users of close-range photogrammetry are architects and civil engineers (to supervise buildings and document their current state, deformations, or damages), archaeologists, surgeons (plastic surgery), and police departments (documentation of traffic accidents and crime scenes), to mention just a few.
2. Brief History of Photogrammetry
1851: Only a decade after the invention of the daguerreotype by Daguerre and Niépce, the French officer Aimé Laussedat develops the first photogrammetric devices and methods. He is seen as the initiator of photogrammetry.
1858: The German architect A. Meydenbauer develops photogrammetrical techniques for the documentation of buildings and installs the first photogrammetric institute in 1885 (Royal Prussian Photogrammetric Institute).
1866: The Viennese physicist Ernst Mach publishes the idea of using the stereoscope to estimate volumetric measures.
1885: The ancient ruins of Persepolis were the first archaeological object recorded photogrammetrically.
1889: The first German manual of photogrammetry was published by C. Koppe.
1896: Édouard Gaston Daniel Deville presents the first stereoscopic instrument for vectorized mapping.
1897/98: Theodor Scheimpflug invents the double projection.
1901: Pulfrich creates the first stereocomparator and revolutionizes mapping from stereopairs.
1903: Theodor Scheimpflug invents the Perspektograph, an instrument for optical rectification.
1910: The ISP (International Society for Photogrammetry), now ISPRS, was founded by E. Dolezal in Austria.
1911: The Austrian Th. Scheimpflug finds a way to create rectified photographs. He is considered the initiator of aerial photogrammetry, since he was the first to succeed in applying photogrammetric principles to aerial photographs.
1913: The first congress of the ISP was held in Vienna.
until 1945: Development and improvement of measuring (= metric) cameras and analogue plotters.
1964: First architectural tests with the new stereometric camera system, which had been developed by Carl Zeiss, Oberkochen, and Hans Foramitti, Vienna.
1964: The Venice Charter (Charte de Venise).
1968: The first international symposium on photogrammetric applications to historical monuments was held in Paris - Saint-Mandé.
1970: Constitution of CIPA (Comité International de la Photogrammétrie Architecturale) as one of the international specialized committees of ICOMOS (International Council on Monuments and Sites), in cooperation with ISPRS. The two main activists were Maurice Carbonnell, France, and Hans Foramitti, Austria.
1970s: The analytical plotters, first used by U. Helava in 1957, revolutionize photogrammetry. They allow the application of more complex methods: aerotriangulation, bundle adjustment, the use of amateur cameras, etc.
1980s: Due to improvements in computer hardware and software, digital photogrammetry gains more and more importance.
1996: 83 years after its first congress, the ISPRS comes back to Vienna, the city where it was founded.
3. Short Description of Photogrammetric Techniques
3.1. Photographing Devices
A photographic image is a central perspective. This implies that every light ray which reached the film surface during exposure passed through the camera lens (mathematically considered as a single point, the so-called perspective center). In order to take measurements of objects from photographs, this ray bundle must be reconstructed. Therefore, the internal geometry of the camera used (defined by the focal length, the position of the principal point, and the lens distortion) has to be precisely known. In photogrammetry, the focal length is called the principal distance: the distance of the projection center from the principal point of the image plane. Depending on the availability of this knowledge, the photogrammetrist divides photographing devices into three categories:
3.1.1. Metric cameras
They have stable and precisely known internal geometries and very low lens distortions. Therefore, they are very expensive devices. The principal distance is constant, which means that the lens cannot be refocused when taking photographs. As a result, metric cameras are only usable within a limited range of distances from the object. The image coordinate system is defined by (mostly) four fiducial marks, which are mounted on the frame of the camera. Terrestrial cameras can be combined with tripods and theodolites. Aerial metric cameras are built into aeroplanes, mostly looking straight downwards. Today, all of them have an image format of 23 by 23 centimeters.
3.1.2. Stereometric camera
If an object is photographed from two different positions, the line between the two projection centers is called the base. If both photographs have viewing directions which are parallel to each other and at a right angle to the base (the so-called normal case), then they have similar properties to the two images on our retinas. Therefore, the overlapping area of these two photographs (called a stereopair) can be seen in 3D, simulating man's stereoscopic vision.
In practice, a stereopair can be produced with a single camera from two positions or using a stereometric camera.
A stereometric camera consists in principle of two metric cameras mounted at both ends of a bar of precisely measured length (mostly 40 or 120 cm). This bar functions as the base. Both cameras have the same geometric properties. Since they are adjusted to the normal case, stereopairs are created easily.
3.1.3. Amateur cameras
The photogrammetrist speaks of an amateur camera when the internal geometry is unstable and unknown, as is the case with any normal commercially available camera. However, these can also be very expensive and technically highly developed professional photographic devices. By photographing a test field with many control points at a repeatably fixed distance setting (for example at infinity), a calibration of the camera can be calculated. In this case, the four corners of the camera frame function as fiducials. However, the precision will never reach that of metric cameras. Therefore, such cameras can only be used for purposes where no high accuracy is demanded. But in many practical cases such photography is better than nothing, and it is very useful in cases of emergency.
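The central-perspective model described at the start of this section (every image ray passing through the perspective center, at principal distance c from the image plane) can be sketched numerically. The following is a deliberately simplified illustration, not the full collinearity model: the camera is assumed to look straight down the Z axis, with no rotation and no lens distortion.

```python
# Minimal sketch of central projection: an object point is projected
# through the perspective center onto the image plane at principal
# distance c. Rotation and distortion are omitted (camera looking
# straight down the Z axis) to keep the example short.

def project(obj_pt, center, c, principal_pt=(0.0, 0.0)):
    """Project a 3D object point (X, Y, Z) into 2D image coordinates."""
    X, Y, Z = obj_pt
    X0, Y0, Z0 = center
    x0, y0 = principal_pt
    # Similar triangles along the ray through the perspective center:
    x = x0 - c * (X - X0) / (Z - Z0)
    y = y0 - c * (Y - Y0) / (Z - Z0)
    return x, y

# A ground point seen by an aerial camera at 1000 m with c = 0.15 m:
x, y = project((100.0, 50.0, 0.0), (0.0, 0.0, 1000.0), 0.15)
```

For a truly vertical photograph of flat terrain, the image scale is simply c divided by the flying height (here 0.15 m / 1000 m, i.e. 1 : 6667).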
3.2. Photogrammetric Techniques
Depending on the available material (metric camera or not, stereopairs, shape of the recorded object, control information, ...) and the required results (2D or 3D, accuracy, ...), different photogrammetric techniques can be applied. Depending on the number of photographs, three main categories can be distinguished.
3.2.1. Mapping from a single photograph
This technique is only useful for plane (2D) objects. Obliquely photographed plane objects show perspective deformations, which have to be rectified. A broad range of techniques exists for rectification, some of them very simple. However, there are limitations: to get good results even with the simple techniques, the object should be plane (for example a wall), and since only a single photograph is used, the mapping can only be done in 2D.
Rectification can be omitted only if the object is flat and the photograph is taken perpendicular to it. In this case, the photograph has a unique scale factor, which can be determined if the length of at least one distance on the object is known.
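The scale determination just described is a single division. A minimal sketch, with made-up numbers purely for illustration:

```python
# Sketch: deriving the scale factor of a truly vertical photograph of a
# flat object from one known distance, then mapping any other measured
# image distance to its object dimension.

def scale_factor(known_object_dist, measured_image_dist):
    """Scale number m, so that image distance * m = object distance."""
    return known_object_dist / measured_image_dist

m = scale_factor(4.0, 0.02)   # a 4 m wall edge measures 2 cm on the print
window_width = 0.004 * m      # any other image distance scales the same way
# window_width is 0.8 (metres)
```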
We will now briefly describe some common techniques:
- Paper strip method
This is the cheapest method, since only a ruler, a piece of paper with a straight edge, and a pencil are required. It was used during the last century. Four points must be identified in the picture and in a map. From one point, lines are drawn to the others (on the image and the map) and to the required object point (on the image). Then the paper strip is placed on the image and the intersections with the lines are marked. The strip is then placed on the map and adjusted so that the marks coincide again with the lines. After that, a line can be drawn on the map to the mark of the required object point. The whole process is repeated from another point, giving the object point on the map as the intersection of the two object lines.
- Optical rectification
It is done using photographic enlargers. These should fulfill the so-called Scheimpflug condition and the vanishing-point condition. Again, at least four control points are required, no three of which may lie on one line. The control points are plotted at a certain scale. The control-point plot is rotated and displaced until two points match the corresponding points of the projected image. After that, the table is tilted by two rotations until the projected negative fits all control points. Then an exposure is made and developed.
- Numerical rectification
Again, the object has to be plane and four control points are required. In numerical rectification, the image coordinates of the desired object points are transformed into the desired coordinate system (which is again 2D). The result is the coordinates of the projected points.
- Differential rectification
If the object is uneven, it has to be divided into smaller parts which are plane. Each part can then be rectified with one of the techniques shown above. Of course, plane objects may also be rectified piecewise, differentially. A prerequisite for differential rectification is the availability of a digital object model, i.e. a dense raster of points on the object with known distances from a reference plane; in aerial photogrammetry this is called a DTM (Digital Terrain Model).
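The 2D transformation behind numerical rectification is a projective one with eight parameters, which is why exactly four control points (no three collinear) are needed. A minimal sketch, assuming exact control points and using plain Gaussian elimination rather than any particular library:

```python
# Numerical rectification as a 2D projective transformation:
#   X = (a*x + b*y + c) / (g*x + h*y + 1)
#   Y = (d*x + e*y + f) / (g*x + h*y + 1)
# Its 8 parameters are fixed by 4 control-point pairs.

def solve(A, b):
    """Gaussian elimination with partial pivoting (plain Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= fac * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def projective_params(img_pts, map_pts):
    """The 8 rectification parameters from 4 control-point pairs."""
    A, b = [], []
    for (x, y), (X, Y) in zip(img_pts, map_pts):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    return solve(A, b)

def rectify(p, pt):
    """Transform one image point into the map coordinate system."""
    a, b, c, d, e, f, g, h = p
    x, y = pt
    w = g * x + h * y + 1
    return (a * x + b * y + c) / w, (d * x + e * y + f) / w

# Constructed example: the unit square mapped to a square of side 2,
# so the recovered transformation is a pure scaling by 2.
params = projective_params([(0, 0), (1, 0), (1, 1), (0, 1)],
                           [(0, 0), (2, 0), (2, 2), (0, 2)])
X, Y = rectify(params, (0.5, 0.5))   # approximately (1.0, 1.0)
```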
- Monoplotting
This technique is similar to numerical rectification, except that the coordinates are transformed into a 3D coordinate system. First, the orientation elements, i.e. the coordinates of the projection center and the three angles defining the viewing direction of the photograph, are calculated by spatial resection. Then, using the calibration data of the camera, any ray that came from the archaeological feature through the lens onto the photograph can be reconstructed and intersected with the digital terrain model.
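The intersection of a reconstructed ray with the terrain model can be sketched as a simple fixed-point iteration. This is a hypothetical illustration: it assumes the orientation elements are already known, that the ray direction has been reconstructed, and that a function dtm(x, y) returning terrain height is available; all names here are illustrative, not from the source.

```python
# Sketch of the monoplotting intersection step: follow the image ray
# downward, look up the terrain height under the current ground
# position, and iterate until ray height and terrain height agree.

def monoplot(center, ray_dir, dtm, z_start=0.0, iterations=20):
    """Intersect a ray (projection center + direction) with a terrain model."""
    X0, Y0, Z0 = center
    dx, dy, dz = ray_dir            # reconstructed ray; assumes dz != 0
    z = z_start
    for _ in range(iterations):
        t = (z - Z0) / dz           # ray parameter at the current height
        x, y = X0 + t * dx, Y0 + t * dy
        z = dtm(x, y)               # terrain height at that ground position
    return x, y, z

# Flat terrain at 100 m, camera at 1000 m, ray pointing down and forward:
ground = monoplot((0.0, 0.0, 1000.0), (0.1, 0.0, -1.0), lambda x, y: 100.0)
# converges to the ground point (90.0, 0.0, 100.0)
```

For flat terrain the iteration converges in one step; for a real DTM it converges quickly as long as the terrain slope is moderate relative to the ray.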
- Digital rectification
Digital rectification is a rather new technique, somewhat similar to monoplotting. Here, however, the scanned image is transformed pixel by pixel into the 3D real-world coordinate system. The result is an orthophoto: a rectified photograph with a unique scale.
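The pixel-by-pixel transformation can be sketched as an indirect resampling loop: each output pixel of the orthophoto is projected back into the source image, and the intensity found there is copied over. The camera model (orientation plus calibration, and the DTM lookup) is collapsed here into a placeholder function ground_to_image; the names are illustrative only.

```python
# Schematic sketch of digital rectification (indirect method): build the
# orthophoto grid cell by cell, asking the camera model where each
# ground cell appears in the source photograph.

def make_orthophoto(width, height, ground_to_image, sample):
    """Fill an orthophoto raster pixel by pixel."""
    ortho = [[0] * width for _ in range(height)]
    for row in range(height):
        for col in range(width):
            # where this ground cell was imaged in the source photo
            img_x, img_y = ground_to_image(col, row)
            ortho[row][col] = sample(img_x, img_y)
    return ortho

# Toy camera model (a shift by one pixel) and toy image, just to run:
ortho = make_orthophoto(3, 2,
                        lambda c, r: (c + 1, r),
                        lambda x, y: 10 * y + x)
```

In practice ground_to_image involves the full collinearity model and a terrain-height lookup, and sample interpolates between neighbouring pixels rather than reading a single value.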
3.2.2. Mapping from stereopairs
As the term already implies, stereopairs are the basic requirement here. These can be produced using stereometric cameras. If only a single camera is available, two photographs can be made from different positions, trying to match the conditions of the normal case. Vertical aerial photographs mostly come close to the normal case. They are made using special metric cameras built into an aeroplane, looking straight downwards. While taking the photographs, the aeroplane flies over the area in a meandering pattern, so that the whole area is covered by overlapping photographs. The overlapping part of each stereopair can be viewed in 3D and consequently mapped in 3D using one of the following techniques:
- Analogue plotting
The analogue method was mainly used until the 1970s. Simply explained, it tries to reverse the recording procedure. Two projectors, which have the same geometric properties as the camera used (these are set during the so-called inner orientation), project the negatives of the stereopair. Their positions then have to be rotated into exactly the same relationship towards each other as at the moment of exposure (= relative orientation). After this step, the projected bundles of light rays from both photographs intersect, forming a (three-dimensional, optical) model. At last, the scale of this model has to be related to its true dimensions, and the rotations and shifts relative to the mapping (world) coordinate system have to be determined. For this, at least three control points, not all on one straight line, are required (= absolute orientation).
The optical model is viewed by means of a stereoscope. The intersection of rays can then be measured point by point using a measuring mark. This consists of two marks, one on each photograph. When viewing the model, the two marks fuse into a 3D one, which can be moved and raised until the desired point of the 3D object is met. The movements of the mark are mechanically transmitted to a drawing device. In that way, maps are created.
- Analytical plotting
The first analytical plotters were introduced in 1957. From the 1970s on, they became commonly available on the market. The idea is still the same as with analogue instruments, but here a computer manages the relationship between image and real-world coordinates. The restitution of the stereopair is done in three steps:
After restoration of the inner orientation, where the computer may now also correct for the distortion of the film, both pictures are relatively oriented. After this step, the pictures can be viewed in 3D. Then the absolute orientation is performed, where the 3D model is transferred into the real-world coordinate system. For this, at least three control points are required.
After the orientation, any detail can be measured from the stereomodel in 3D. As in the analogue instrument, the model and a corresponding measuring mark are seen in 3D. The movements of the mark are under the operator's control. The main difference from the former analogue plotting process is that the plotter no longer plots directly onto the map, but onto the monitor's screen or into the database of the computer.
The analytical plotter uses the computer to calculate the real-world coordinates, which can be stored as an ASCII file or transferred on-line into CAD programs. In that way, 3D drawings are created, which can be stored digitally, combined with other data, and plotted later at any scale.
- Digital plotting
Digital techniques have become widely available during the last decade. Here, the images are not on film but stored digitally on tape or disk. Each picture element (pixel) has a known position and a measured intensity value: only one for black-and-white images, several such values for colour or multispectral images.
3.2.3. Mapping from several photographs
This kind of restitution, which can be done in 3D, has only become possible with analytical and digital photogrammetry. Since the required hardware and software are steadily getting cheaper, its fields of application grow from day to day.
Here, mostly more than two photographs are used. 3D objects are photographed from several positions located around the object, such that any object point is visible on at least two, better three, photographs. The photographs can be taken with different cameras (even amateur cameras) and at different times (if the object does not move).
As mentioned above, only analytical or digital techniques can be used.
With all methods, a bundle adjustment first has to be calculated. Using control points and triangulation points, the geometry of the whole block of photographs is reconstructed with high precision. Then the image coordinates of any desired object point, measured in at least two photographs, can be intersected. The result is the coordinates of the required points.
In that way, the whole 3D object is digitally reconstructed.
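The intersection step can be sketched in isolation. After the bundle adjustment, each image measurement defines a ray in object space; two such rays rarely meet exactly, so a common choice is to take the midpoint of their shortest connecting segment as the object point. A minimal sketch of that least-squares intersection (the bundle adjustment itself is assumed done):

```python
# Intersect two object-space rays p1 + t1*d1 and p2 + t2*d2 by finding
# the parameters that minimize the distance between the two rays, then
# returning the midpoint of the shortest connecting segment.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_rays(p1, d1, p2, d2):
    """Least-squares intersection of two (possibly skew) rays."""
    w = [b - a for a, b in zip(p1, p2)]           # vector from p1 to p2
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    b1, b2 = dot(d1, w), dot(d2, w)
    det = a11 * a22 - a12 * a12                   # zero for parallel rays
    t1 = (a22 * b1 - a12 * b2) / det
    t2 = (a12 * b1 - a11 * b2) / det
    q1 = [p + t1 * d for p, d in zip(p1, d1)]     # closest point on ray 1
    q2 = [p + t2 * d for p, d in zip(p2, d2)]     # closest point on ray 2
    return [(a + b) / 2 for a, b in zip(q1, q2)]

# Constructed example: two rays that cross exactly at (1, 1, 1).
point = intersect_rays((0, 0, 0), (1, 1, 1), (2, 0, 0), (-1, 1, 1))
```

With real measurements the two rays are skew, and the length of the connecting segment is a useful indicator of the measurement quality.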