Operator Reference
stationary_camera_self_calibration (Operator)
stationary_camera_self_calibration — Perform a self-calibration of a stationary projective camera.
Signature
stationary_camera_self_calibration( : : NumImages, ImageWidth, ImageHeight, ReferenceImage, MappingSource, MappingDest, HomMatrices2D, Rows1, Cols1, Rows2, Cols2, NumCorrespondences, EstimationMethod, CameraModel, FixedCameraParams : CameraMatrices, Kappa, RotationMatrices, X, Y, Z, Error)
Description
stationary_camera_self_calibration
performs a
self-calibration of a stationary projective camera. Here,
stationary means that the camera may only rotate around the optical
center and may zoom. Hence, the optical center may not move.
Projective means that the camera model is a pinhole camera that can
be described by a projective 3D-2D transformation. In particular,
radial distortions can only be modeled for cameras with constant
parameters. If the lens exhibits significant radial distortions, they
should be removed, at least approximately, with
change_radial_distortion_image.
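If the interior camera parameters of the lens are known, e.g., from a
previous camera calibration, this can be done as in the following minimal
HDevelop sketch (Image and CamParamIn are assumed, hypothetical inputs):

* Compute camera parameters with the radial distortion set to 0.
change_radial_distortion_cam_par ('adaptive', CamParamIn, 0, CamParamOut)
* Rectify the image within its full domain before extracting points.
get_domain (Image, Domain)
change_radial_distortion_image (Image, Domain, ImageRectified, \
                                CamParamIn, CamParamOut)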
The camera model being used can be described as follows:

\[ x = P X \]

Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a
homogeneous 3x4 projection matrix. The projection matrix P can be
decomposed as follows:

\[ P = K\, [\, R \mid t \,] \]
Here, R is a 3x3 rotation matrix
and t is an inhomogeneous 3D vector. These two
entities describe the position (pose) of the camera in 3D space.
This convention is analogous to the convention used in
camera_calibration
, i.e., for
R=I and t=0 the x
axis points to the right, the y axis downwards, and the z axis
points forward. K is the calibration matrix of the camera (the camera
matrix), which can be described as follows:

\[ K = \begin{pmatrix} f & s & u \\ 0 & a f & v \\ 0 & 0 & 1 \end{pmatrix} \]
Here, f is the focal length of the camera in pixels, a the
aspect ratio of the pixels, s is a factor that models the skew of
the image axes, and (u,v) is the principal point of the camera in
pixels. In this convention, the x axis corresponds to the column
axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that
t=0. With this convention, it is easy to see
that the fourth coordinate of the homogeneous 3D vector
X has no influence on the position of the
projected 3D point. Consequently, the fourth coordinate can be set to 0,
and X can be regarded as a point at infinity, i.e., it represents a
direction in 3D. With this convention, the fourth coordinate of X can be
omitted, and hence X can be regarded as an inhomogeneous 3D vector that
is determined only up to scale, since it represents a direction. With
this, the above projection equation can be written as follows:

\[ x = K R X \]

If two images of the same point are taken with a stationary camera, the
following equations hold:

\[ x_1 = K_1 R_1 X \qquad\text{and}\qquad x_2 = K_2 R_2 X , \]

and consequently

\[ x_2 = K_2 R_2 R_1^{-1} K_1^{-1}\, x_1 . \]

If the camera parameters do not change when taking the two images,
K_1 = K_2 holds. Because of the above, the two
images of the same 3D point are related by a projective 2D
transformation. This transformation can be determined with
proj_match_points_ransac
. It needs to be taken into account
that the order of the coordinates of the projective 2D
transformations in HALCON is the opposite of the above convention.
Furthermore, it needs to be taken into account that
proj_match_points_ransac
uses a coordinate system in which
the origin of a pixel lies in the upper left corner of the pixel,
whereas stationary_camera_self_calibration
uses a coordinate
system that corresponds to the definition used in
camera_calibration
, in which the origin of a pixel lies in
the center of the pixel. For projective 2D transformations that are
determined with proj_match_points_ransac
the rows and
columns must be exchanged and a translation of (0.5,0.5) must be
applied. Hence, instead of the equation

\[ x_2 = K_2 R_2 R_1^{-1} K_1^{-1}\, x_1 , \]

the following equations hold in HALCON:

\[ x_2 = T\, K_2 R_2 R_1^{-1} K_1^{-1}\, T^{-1}\, x_1 \]

and

\[ T = \begin{pmatrix} 0 & 1 & 0.5 \\ 1 & 0 & 0.5 \\ 0 & 0 & 1 \end{pmatrix} , \]

where T exchanges the row and column coordinates and shifts the origin
by (0.5,0.5).
From the above equation, constraints on the camera parameters can be
derived in two ways. First, the rotation can be eliminated from the
above equation, leading to equations that relate the camera matrices
with the projective 2D transformation between the two images. Let H_ji
be the projective transformation from image i to image j. Then,

\[ K_j K_j^\top = H_{ji}\, K_i K_i^\top\, H_{ji}^\top \]

and

\[ \left(K_j K_j^\top\right)^{-1} = H_{ji}^{-\top}\, \left(K_i K_i^\top\right)^{-1}\, H_{ji}^{-1} . \]

From the second equation, linear constraints on the camera parameters
can be derived. This method is used for EstimationMethod = 'linear'.
Here, all source
images i given by MappingSource
and all destination
images j given by MappingDest
are used to compute
constraints on the camera parameters. After the camera parameters
have been determined from these constraints, the rotation of the
camera in the respective images can be determined based on the equation

\[ R_j = K_j^{-1}\, H_{ji}\, K_i\, R_i \]

and by constructing a chain of transformations from the reference image
ReferenceImage to the respective image. From the first equation above, a
nonlinear method to determine the camera parameters can be derived by
minimizing the following error:

\[ \sum_{(i,j) \in \Omega} \left\lVert K_j K_j^\top - H_{ji}\, K_i K_i^\top\, H_{ji}^\top \right\rVert_F^2 \]
Here, analogously to the linear method, Ω is the set of overlapping
image pairs specified by MappingSource and MappingDest. This method is
used for EstimationMethod = 'nonlinear'. To start the
minimization, the camera parameters are initialized with the results
of the linear method. These two methods are very fast and return
acceptable results if the projective 2D transformations
are sufficiently accurate. For this, it
is essential that the images do not have radial distortions. It can
also be seen that in the above two methods the camera parameters are
determined independently from the rotation parameters, and
consequently the possible constraints are not fully exploited. In
particular, it can be seen that it is not enforced that the
projections of the same 3D point lie close to each other in all
images. Therefore, stationary_camera_self_calibration
offers a complete bundle adjustment as a third method
(EstimationMethod
= 'gold_standard'). Here, the camera parameters and rotations, as well
as the directions in 3D corresponding to the image points (denoted by
the vectors X above), are determined in a single optimization by
minimizing the following error:

\[ \sum_{i} \sum_{k} d\!\left(x_{i,k},\, K_i R_i X_k\right)^2 + P \]

Here, d(·,·) denotes the distance between the measured image point
x_{i,k} and the projection of the direction X_k, and P is an optional
penalty term on the position of the principal point (see below). In this
equation, only the terms for which the reconstructed direction X_k is
visible in image i are taken
into account. The starting values for the parameters in the bundle
adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization, the bundle
adjustment requires a significantly longer execution time than the two
simpler methods. Nevertheless, because the bundle adjustment yields
significantly better results, it should be preferred.
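For the common case of constant camera parameters (FixedCameraParams =
'true', i.e., K_i = K for all images), the structure of the linear
method can be made concrete. The following is a sketch of the reasoning,
following the self-calibration approach of the reference cited below:
writing ω* = K Kᵀ, the constraint derived above becomes, for each pair
(i,j) in Ω,

\[ \omega^* \propto H_{ji}\, \omega^*\, H_{ji}^\top . \]

Since ω* is symmetric, it has only six independent entries. Stacking the
equations of all pairs yields a homogeneous linear system whose solution
determines ω* up to scale; K can then be recovered from ω* by a Cholesky
factorization.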
In each of the three methods the camera parameters that should be
computed can be specified. The remaining parameters are set to a
constant value. Which parameters should be computed is determined
with the parameter CameraModel
which contains a tuple of
values. CameraModel
must always contain the value
'focus' that specifies that the focal length f is
computed. If CameraModel
contains the value
'principal_point', the principal point (u,v) of the camera
is computed. If not, the principal point is set to
(ImageWidth/2, ImageHeight/2). If
CameraModel
contains the value 'aspect', the aspect
ratio a of the pixels is determined, otherwise it is set to 1. If
CameraModel
contains the value 'skew', the skew of
the image axes is determined, otherwise it is set to 0. Only the
following combinations of the parameters are allowed:
'focus',
['focus','principal_point'],
['focus','aspect'],
['focus','principal_point','aspect'], and
['focus','principal_point','aspect','skew'].
Additionally, it is possible to determine the parameter
Kappa
, which models radial lens distortions, if
EstimationMethod
= 'gold_standard' has been
selected. In this case, 'kappa' can also be included in
the parameter CameraModel
. Kappa
corresponds to
the radial distortion parameter of the division
model for lens distortions (see camera_calibration).
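For reference, the division model expresses the rectified image plane
coordinates (u,v) in terms of the distorted coordinates (ũ,ṽ) with the
single parameter κ; the form below is the one described for
camera_calibration:

\[ u = \frac{\tilde u}{1 + \kappa\,(\tilde u^2 + \tilde v^2)} \qquad v = \frac{\tilde v}{1 + \kappa\,(\tilde u^2 + \tilde v^2)} \]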
When using EstimationMethod
= 'gold_standard' to
determine the principal point, it is possible to penalize
estimations far away from the image center. This is done by appending a
sigma to the parameter value, e.g., 'principal_point:0.5'. If no sigma
is given, the penalty term in the above equation for calculating the
error is omitted.
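For example, the following tuple requests the estimation of the focal
length and of the principal point, with the principal point penalized
using a sigma of 0.5 (an illustrative value that must be adapted to the
application):

CameraModel := ['focus','principal_point:0.5']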
The parameter FixedCameraParams
determines whether the
camera parameters can change in each image or whether they should be
assumed constant for all images. To calibrate a camera so that it
can later be used for measuring with the calibrated camera, only
FixedCameraParams
= 'true' is useful. The mode
FixedCameraParams
= 'false' is mainly useful to
compute spherical mosaics with gen_spherical_mosaic
if the
camera zoomed or if the focus changed significantly when the mosaic
images were taken. If a mosaic with constant camera parameters
should be computed, of course FixedCameraParams
=
'true' should be used. It should be noted that for
FixedCameraParams = 'false', the camera calibration problem is very
poorly determined, especially for long focal lengths. In these cases,
often only the focal length can be determined. Therefore, it may be
necessary to use CameraModel = 'focus' or to constrain the position of
the principal point by using a small sigma for the penalty term for the
principal point.
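A minimal sketch of a corresponding call for a zooming camera (the image
size and all variable names are placeholders; the input tuples are
accumulated as described below):

* Per-image camera parameters for a zooming camera: estimate only the
* focal length to keep the problem well determined.
stationary_camera_self_calibration (NumImages, 640, 480, 1, \
                                    MappingSource, MappingDest, \
                                    HomMatrices2D, Rows1, Cols1, \
                                    Rows2, Cols2, NumCorrespondences, \
                                    'gold_standard', 'focus', 'false', \
                                    CameraMatrices, Kappa, \
                                    RotationMatrices, X, Y, Z, Error)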
The number of images that are used for the calibration is passed in
NumImages
. Based on the number of images, several
constraints for the camera model must be observed. If only two
images are used, even under the assumption of constant parameters,
not all camera parameters can be determined. In this case, the skew
of the image axes should be set to 0 by not adding
'skew' to CameraModel
. If
FixedCameraParams
= 'false' is used, the full
set of camera parameters can never be determined, no matter how many
images are used. In this case, the skew should be set to 0 as well.
Furthermore, it should be noted that the aspect ratio can only be
determined accurately if at least one image is rotated around the
optical axis (the z axis of the camera coordinate system) with
respect to the other images. If this is not the case, the
computation of the aspect ratio should be suppressed by not
adding 'aspect' to CameraModel
.
As described above, to calibrate the camera it is necessary that the
projective transformation for each overlapping image pair is
determined with proj_match_points_ransac
. For example, for
a 2x2 block of images in the following layout

    1 | 2
    3 | 4

the following projective transformations should be determined, assuming
that all images overlap each other: 1->2, 1->3, 1->4, 2->3, 2->4, and
3->4. The indices of the
images that determine the respective transformation are given by
MappingSource
and MappingDest
. The indices start at 1. Consequently, in the above example
MappingSource
= [1,1,1,2,2,3] and
MappingDest
= [2,3,4,3,4,4] must be used. The
number of images in the mosaic is given by NumImages
. It
is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by
ReferenceImage
. On output, this image has the identity
matrix as its transformation matrix.
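If all NumImages images overlap each other, the index tuples can be
generated generically, as in the following minimal HDevelop sketch:

* Generate MappingSource and MappingDest for all pairs (i,j) with i < j.
* Image indices start at 1.
MappingSource := []
MappingDest := []
for I := 1 to NumImages - 1 by 1
    for J := I + 1 to NumImages by 1
        MappingSource := [MappingSource,I]
        MappingDest := [MappingDest,J]
    endfor
endfor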
The 3x3 projective transformation matrices that
correspond to the image pairs are passed in HomMatrices2D
.
Additionally, the coordinates of the matched point pairs in the
image pairs must be passed in Rows1
, Cols1
,
Rows2
, and Cols2
. They can be determined from the
output of proj_match_points_ransac
with tuple_select
or with the HDevelop function subset
. To enable
stationary_camera_self_calibration
to determine which point
pair belongs to which image pair, NumCorrespondences
must
contain the number of found point matches for each image pair.
The computed camera matrices are returned in
CameraMatrices
as 3x3 matrices. For
FixedCameraParams
= 'false' , NumImages
matrices are returned. Since for FixedCameraParams
=
'true' all camera matrices are identical, a single camera
matrix is returned in this case. The computed rotations
are returned in RotationMatrices
as
3x3 matrices. RotationMatrices
always
contains NumImages
matrices.
If EstimationMethod
= 'gold_standard' is used,
(X
, Y
, Z
) contains the reconstructed
directions. In addition, Error
contains the average projection error of the reconstructed
directions. This can be used to check whether the optimization has
converged to useful values.
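An unusually large average error typically indicates wrong or too few
point correspondences. A minimal sketch of such a check (the threshold
of 1.0 pixels is a hypothetical value that must be adapted to the
application):

* Plausibility check of the calibration result.
if (Error > 1.0)
    * The optimization probably did not converge to useful values;
    * inspect the point correspondences and the homographies.
    stop ()
endif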
If the computed camera parameters are used to project 3D points or 3D
directions into image i, the respective camera matrix should be
multiplied with the corresponding rotation matrix (with
hom_mat2d_compose).
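A minimal sketch for FixedCameraParams = 'true', assuming that each 3x3
matrix is stored as nine consecutive tuple elements (row by row):

* Select the rotation matrix of image I (1-based) and compose it with
* the single camera matrix K to obtain the full mapping K * R_I.
RotMatI := RotationMatrices[(I - 1) * 9:I * 9 - 1]
hom_mat2d_compose (CameraMatrices, RotMatI, ProjMatI)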
Execution Information
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Parameters
NumImages (input_control) integer → (integer)
    Number of different images that are used for the calibration.
    Restriction: NumImages >= 2
ImageWidth (input_control) extent.x → (integer)
    Width of the images from which the points were extracted.
    Restriction: ImageWidth > 0
ImageHeight (input_control) extent.y → (integer)
    Height of the images from which the points were extracted.
    Restriction: ImageHeight > 0
ReferenceImage (input_control) integer → (integer)
    Index of the reference image.
MappingSource (input_control) integer-array → (integer)
    Indices of the source images of the transformations.
MappingDest (input_control) integer-array → (integer)
    Indices of the target images of the transformations.
HomMatrices2D (input_control) hom_mat2d-array → (real)
    Array of 3x3 projective transformation matrices.
Rows1 (input_control) point.y-array → (real / integer)
    Row coordinates of corresponding points in the respective source images.
Cols1 (input_control) point.x-array → (real / integer)
    Column coordinates of corresponding points in the respective source images.
Rows2 (input_control) point.y-array → (real / integer)
    Row coordinates of corresponding points in the respective destination images.
Cols2 (input_control) point.x-array → (real / integer)
    Column coordinates of corresponding points in the respective destination images.
NumCorrespondences (input_control) integer-array → (integer)
    Number of point correspondences in the respective image pair.
EstimationMethod (input_control) string → (string)
    Estimation algorithm for the calibration.
    Default: 'gold_standard'
    List of values: 'gold_standard', 'linear', 'nonlinear'
CameraModel (input_control) string-array → (string)
    Camera model to be used.
    Default: ['focus','principal_point']
    List of values: 'aspect', 'focus', 'kappa', 'principal_point', 'skew'
FixedCameraParams (input_control) string → (string)
    Are the camera parameters identical for all images?
    Default: 'true'
    List of values: 'false', 'true'
CameraMatrices (output_control) hom_mat2d-array → (real)
    (Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Kappa (output_control) real(-array) → (real)
    Radial distortion of the camera.
RotationMatrices (output_control) hom_mat2d-array → (real)
    Array of 3x3 transformation matrices that determine the rotation of the camera in the respective image.
X (output_control) point3d.x-array → (real)
    X component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Y (output_control) point3d.y-array → (real)
    Y component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Z (output_control) point3d.z-array → (real)
    Z component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Error (output_control) real(-array) → (real)
    Average error per reconstructed point if EstimationMethod = 'gold_standard' is used.
Example (HDevelop)
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
    select_obj (Images, ImageF, From[J])
    select_obj (Images, ImageT, To[J])
    points_foerstner (ImageF, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
                      RowsF, ColsF, _, _, _, _, _, _, _, _)
    points_foerstner (ImageT, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
                      RowsT, ColsT, _, _, _, _, _, _, _, _)
    proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT, \
                              'ncc', 10, 0, 0, 480, 640, 0, 0.5, \
                              'gold_standard', 2, 42, HomMat2D, \
                              Points1, Points2)
    HomMatrices2D := [HomMatrices2D,HomMat2D]
    Rows1 := [Rows1,subset(RowsF,Points1)]
    Cols1 := [Cols1,subset(ColsF,Points1)]
    Rows2 := [Rows2,subset(RowsT,Points2)]
    Cols2 := [Cols2,subset(ColsT,Points2)]
    NumMatches := [NumMatches,|Points1|]
endfor
stationary_camera_self_calibration (4, 640, 480, 1, From, To, \
                                    HomMatrices2D, Rows1, Cols1, \
                                    Rows2, Cols2, NumMatches, \
                                    'gold_standard', \
                                    ['focus','principal_point'], \
                                    'true', CameraMatrix, Kappa, \
                                    RotationMatrices, X, Y, Z, Error)
Result
If the parameters are valid, the operator
stationary_camera_self_calibration returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Possible Predecessors
proj_match_points_ransac, proj_match_points_ransac_guided
Possible Successors
See also
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal of Computer Vision; vol. 45, no. 2; pp. 107--127; 2001.
Module
Calibration