Suppose three mutually orthogonal directions in the world, namely x, y, and z. We assume that for each direction the photo contains at least two lines parallel to it. These 3 x 2 = 6 lines, shown below, give Photo3D sufficient information to determine the camera orientation.
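As a minimal sketch of the underlying geometry (not Photo3D's actual code), each pair of image segments parallel to one world direction yields a vanishing point: the intersection of the two image lines, conveniently computed with cross products in homogeneous coordinates. The segment endpoints below are made-up example values.

```python
import numpy as np

def vanishing_point(seg_a, seg_b):
    """Intersect two image line segments (each a pair of (x, y) endpoints).
    Image lines of parallel world lines meet at the vanishing point of
    their common 3-D direction."""
    def line(p, q):
        # Homogeneous line through two points: cross product of the points.
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    v = np.cross(line(*seg_a), line(*seg_b))  # intersection of the two lines
    return v / v[2]                           # normalize (assumes a finite VP)

# Two segments drawn along edges parallel to, say, the x direction:
vx = vanishing_point(((0, 0), (4, 1)), ((0, 2), (4, 2.5)))
```

With one such vanishing point per world direction, the three points together constrain the camera orientation, which is why two lines per direction suffice.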
||When an image is loaded, three pairs of line segments are
displayed against the background image in their initial positions. Each
end point can be moved by picking and dragging.
(The colored lines have been retouched, because the original lines were too thin to be clearly visible in this reduced image.)
||We move the line segments onto edges in the background image.
From this information Photo3D automatically calculates the spatial relation between the camera and the world.
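To illustrate what such a calculation can yield (a sketch under standard pinhole assumptions with a hypothetical known focal length `f` and photo center `pp`, not Photo3D's actual implementation), the camera orientation follows directly from the three vanishing points: each back-projected ray, normalized, is one column of the rotation relating world axes to the camera frame.

```python
import numpy as np

def rotation_from_vps(vps, f, pp):
    """Camera orientation from the vanishing points of the three world
    axes: the back-projected ray of each vanishing point, normalized,
    gives one column of the rotation (up to per-axis sign flips)."""
    cols = []
    for v in vps:
        r = np.array([v[0] - pp[0], v[1] - pp[1], f])  # ray through the VP
        cols.append(r / np.linalg.norm(r))
    return np.stack(cols, axis=1)
```

In practice the three columns are only approximately orthogonal because of measurement noise, so a real calibration would re-orthonormalize or solve a least-squares problem instead.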
A characteristic feature of Photo3D is that it requires only information extracted from images. This eliminates the need to input camera parameters such as focal length or field-of-view angle, which are unknown in certain situations, e.g. when a zoom mechanism was used or when only the image is available without any additional data.
Moreover, as shown in the figure above, the center of the photo (the intersection of the image plane and the optical axis of the lens) need not coincide with the center of the image. This even enables us to use an image cropped from an original photo without knowing which portion was extracted.
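One standard way such an unknown photo center can be recovered (a sketch of the classical result, not necessarily Photo3D's algorithm): for three mutually orthogonal world directions, the principal point is the orthocenter of the triangle formed by their three vanishing points.

```python
import numpy as np

def orthocenter(va, vb, vc):
    """Orthocenter of the triangle of three vanishing points. For three
    mutually orthogonal world directions this coincides with the photo
    center, so the photo center can itself be estimated from the image."""
    def altitude(apex, base1, base2):
        d = np.subtract(base2, base1)             # direction of opposite side
        return np.array([d[0], d[1], -d @ np.asarray(apex, float)])
    # Intersect two altitudes (homogeneous lines) via a cross product.
    h = np.cross(altitude(va, vb, vc), altitude(vb, va, vc))
    return h[:2] / h[2]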
However, in many cases we can assume that the photo center coincides
with the image center, or can at least fix where the photo center lies on
the image. In such a case Photo3D exploits the fact that the photo center is
known, and adds this information to the input for calibration. This flexibility
--- the position of the photo center can be treated either as an unknown to be
estimated or as a known input --- increases the usability of Photo3D
without sacrificing the precision of the calibration.
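When the photo center is known, fewer vanishing points are needed: two vanishing points of orthogonal world directions already determine the focal length, via the orthogonality constraint (v1 - pp) . (v2 - pp) + f^2 = 0. A hedged sketch of that textbook relation (function name and variables are illustrative, not Photo3D's API):

```python
import numpy as np

def focal_from_two_vps(v1, v2, pp):
    """Focal length (in pixels) from the vanishing points of two
    orthogonal world directions, with the photo center pp known,
    e.g. fixed at the image center."""
    d = -np.dot(np.subtract(v1, pp), np.subtract(v2, pp))
    if d <= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    return np.sqrt(d)
```

Supplying the photo center as input thus reduces the number of unknowns, which is how the extra information helps the calibration.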