Operator Reference

vector_to_essential_matrix (Operator)

vector_to_essential_matrix — Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D points.

Signature

vector_to_essential_matrix( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2, CamMat1, CamMat2, Method : EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)

Herror T_vector_to_essential_matrix(const Htuple Rows1, const Htuple Cols1, const Htuple Rows2, const Htuple Cols2, const Htuple CovRR1, const Htuple CovRC1, const Htuple CovCC1, const Htuple CovRR2, const Htuple CovRC2, const Htuple CovCC2, const Htuple CamMat1, const Htuple CamMat2, const Htuple Method, Htuple* EMatrix, Htuple* CovEMat, Htuple* Error, Htuple* X, Htuple* Y, Htuple* Z, Htuple* CovXYZ)

void VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamMat1, const HTuple& CamMat2, const HTuple& Method, HTuple* EMatrix, HTuple* CovEMat, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const HString& Method, HTuple* CovEMat, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const HString& Method, HTuple* CovEMat, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const char* Method, HTuple* CovEMat, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const wchar_t* Method, HTuple* CovEMat, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const   ( Windows only)

static void HOperatorSet.VectorToEssentialMatrix(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HTuple camMat1, HTuple camMat2, HTuple method, out HTuple EMatrix, out HTuple covEMat, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)

HHomMat2D HHomMat2D.VectorToEssentialMatrix(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HHomMat2D camMat2, string method, out HTuple covEMat, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)

HHomMat2D HHomMat2D.VectorToEssentialMatrix(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HHomMat2D camMat2, string method, out HTuple covEMat, out double error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)

def vector_to_essential_matrix(rows_1: Sequence[Union[float, int]], cols_1: Sequence[Union[float, int]], rows_2: Sequence[Union[float, int]], cols_2: Sequence[Union[float, int]], cov_rr1: Sequence[Union[float, int]], cov_rc1: Sequence[Union[float, int]], cov_cc1: Sequence[Union[float, int]], cov_rr2: Sequence[Union[float, int]], cov_rc2: Sequence[Union[float, int]], cov_cc2: Sequence[Union[float, int]], cam_mat_1: Sequence[Union[float, int]], cam_mat_2: Sequence[Union[float, int]], method: str) -> Tuple[Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float]]

def vector_to_essential_matrix_s(rows_1: Sequence[Union[float, int]], cols_1: Sequence[Union[float, int]], rows_2: Sequence[Union[float, int]], cols_2: Sequence[Union[float, int]], cov_rr1: Sequence[Union[float, int]], cov_rc1: Sequence[Union[float, int]], cov_cc1: Sequence[Union[float, int]], cov_rr2: Sequence[Union[float, int]], cov_rc2: Sequence[Union[float, int]], cov_cc2: Sequence[Union[float, int]], cam_mat_1: Sequence[Union[float, int]], cam_mat_2: Sequence[Union[float, int]], method: str) -> Tuple[Sequence[float], Sequence[float], float, Sequence[float], Sequence[float], Sequence[float], Sequence[float]]

Description

For a stereo configuration with known camera matrices, the geometric relation between the two images is defined by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix EMatrix from, in general, at least six given point correspondences that fulfill the epipolar constraint:

    (X2 Y2 1) * EMatrix * (X1 Y1 1)^T = 0

Here, (X1,Y1,1) and (X2,Y2,1) denote the direction vectors of a corresponding point pair in the first and second camera, respectively (see below).

The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is in contrast to the operator vector_to_rel_pose, which also takes lens distortions into account. The internal camera parameters are passed by the arguments CamMat1 and CamMat2, which are 3x3 upper triangular matrices describing an affine transformation. The relation between the vector (X,Y,1), defining the direction from the camera to the viewed 3D point, and its (projective) 2D image coordinates (col,row,1) is:

    ( col )   ( f/sx   s    cx )   ( X )
    ( row ) = (  0    f/sy  cy ) * ( Y )
    (  1  )   (  0     0     1 )   ( 1 )

The focal length is denoted by f, sx and sy are scaling factors, s describes a skew factor, and (cx,cy) indicates the principal point. Mainly, these are the elements known from the camera parameters as used, for example, in calibrate_cameras. Alternatively, the elements of the camera matrix can be described in a different way, see e.g. stationary_camera_self_calibration.

The point correspondences (Rows1,Cols1) and (Rows2,Cols2) are typically found by applying the operator match_essential_matrix_ransac. Multiplying the image coordinates by the inverse of the camera matrices results in the 3D direction vectors, which can then be inserted into the epipolar constraint.
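
The sketch below illustrates the typical call in HDevelop. It assumes that the camera matrices CamMat1 and CamMat2 and the matched point tuples are already available, e.g., from a preceding calibration and a call to match_essential_matrix_ransac, and that the point covariances are unknown (empty tuples):

    * Linear estimation of the essential matrix from given matches;
    * the six empty tuples stand for unknown point covariances.
    vector_to_essential_matrix (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], CamMat1, CamMat2, 'normalized_dlt', EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)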

The parameter Method decides whether the relative orientation between the cameras is of a special type and which algorithm is to be applied for its computation. If Method is either 'normalized_dlt' or 'gold_standard', the relative orientation is arbitrary. Choosing 'trans_normalized_dlt' or 'trans_gold_standard' means that the relative motion between the cameras is a pure translation. The typical application for this special motion case is a single fixed camera looking onto a moving conveyor belt. In this case, the minimum number of required point correspondences is just two instead of six in the general case. A small sketch of this case follows below.
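
For the conveyor-belt scenario, a hypothetical HDevelop sketch could use the same camera matrix for both images (here called CamMat, an assumed variable) and only two correspondences:

    * Pure translation between the two views: two matches suffice.
    vector_to_essential_matrix (Rows1[0:1], Cols1[0:1], Rows2[0:1], Cols2[0:1], [], [], [], [], [], [], CamMat, CamMat, 'trans_normalized_dlt', EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)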

The essential matrix is computed by a linear algorithm if 'normalized_dlt' or 'trans_normalized_dlt' is chosen. With 'gold_standard' or 'trans_gold_standard' the algorithm gives a statistically optimal result. Here, 'normalized_dlt' and 'gold_standard' stand for the direct linear transformation and the gold standard algorithm, respectively. All methods return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also return the covariances of the 3D points in CovXYZ: if n is the number of points, the 3x3 covariance matrices are concatenated and stored in a tuple of length 9n. Additionally, the optimal methods return the covariance of the essential matrix in CovEMat.
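
Since the covariance matrices in CovXYZ are simply concatenated, the nine values belonging to the i-th reconstructed point can be selected by tuple indexing. A small sketch, assuming one of the gold-standard methods was used so that CovXYZ is not empty (the index I and the name CovPointI are only illustrative):

    * Select the 3x3 covariance matrix (9 values) of point I.
    I := 3
    CovPointI := CovXYZ[I * 9:I * 9 + 8]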

If an optimal gold-standard algorithm is chosen, the covariances of the image points (CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided, for example, by the operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are input. In this case, the optimization algorithm internally assumes uniform and equal covariances for all points.
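
A sketch of the statistically optimal variant, assuming the covariance tuples CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, and CovCC2 have been determined beforehand (e.g., by points_foerstner) for exactly the matched points:

    * Optimal estimation that also weights the matches by their
    * individual point covariances.
    vector_to_essential_matrix (Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2, CamMat1, CamMat2, 'gold_standard', EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)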

The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean distance in pixels between the points and their corresponding epipolar lines.
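
Error can therefore serve as a simple quality gate; the threshold below is only an illustrative value:

    * Reject the estimate if the RMS epipolar distance is too large.
    if (Error > 1.0)
        * e.g., repeat the point matching with stricter parameters
    endif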

For the operator vector_to_essential_matrix a special configuration of scene points and cameras exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras, the solution for the essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by the operator. This means that all output parameters are of double length and the values of the second solution are simply concatenated behind those of the first one.
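
Since CovEMat is a 9x9 covariance matrix, the essential matrix itself is stored as 9 tuple values; the two concatenated solutions can therefore be separated as sketched below, and the 3D point tuples are split in the same way:

    * Split the doubled output of the degenerate (planar) case.
    if (|EMatrix| == 18)
        EMatrix1 := EMatrix[0:8]
        EMatrix2 := EMatrix[9:17]
        NumPoints := |X| / 2
        X1 := X[0:NumPoints - 1]
        X2 := X[NumPoints:|X| - 1]
    endif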

Execution Information

  • Multithreading type: reentrant (runs in parallel with non-exclusive operators).
  • Multithreading scope: global (may be called from any thread).
  • Processed without parallelization.

Parameters

Rows1 (input_control)  number-array (real / integer)

Input points in image 1 (row coordinate).

Restriction: length(Rows1) >= 6 || length(Rows1) >= 2

Cols1 (input_control)  number-array (real / integer)

Input points in image 1 (column coordinate).

Restriction: length(Cols1) == length(Rows1)

Rows2 (input_control)  number-array (real / integer)

Input points in image 2 (row coordinate).

Restriction: length(Rows2) == length(Rows1)

Cols2 (input_control)  number-array (real / integer)

Input points in image 2 (column coordinate).

Restriction: length(Cols2) == length(Rows1)

CovRR1 (input_control)  number-array (real / integer)

Row coordinate variance of the points in image 1.

Default: []

CovRC1 (input_control)  number-array (real / integer)

Covariance between the row and column coordinates of the points in image 1.

Default: []

CovCC1 (input_control)  number-array (real / integer)

Column coordinate variance of the points in image 1.

Default: []

CovRR2 (input_control)  number-array (real / integer)

Row coordinate variance of the points in image 2.

Default: []

CovRC2 (input_control)  number-array (real / integer)

Covariance between the row and column coordinates of the points in image 2.

Default: []

CovCC2 (input_control)  number-array (real / integer)

Column coordinate variance of the points in image 2.

Default: []

CamMat1 (input_control)  hom_mat2d (real / integer)

Camera matrix of the 1st camera.

CamMat2 (input_control)  hom_mat2d (real / integer)

Camera matrix of the 2nd camera.

Method (input_control)  string

Algorithm for the computation of the essential matrix and for special camera orientations.

Default: 'normalized_dlt'

List of values: 'gold_standard', 'normalized_dlt', 'trans_gold_standard', 'trans_normalized_dlt'

EMatrix (output_control)  hom_mat2d (real)

Computed essential matrix.

CovEMat (output_control)  real-array (real)

9x9 covariance matrix of the essential matrix.

Error (output_control)  real(-array) (real)

Root-Mean-Square of the epipolar distance error.

X (output_control)  real-array (real)

X coordinates of the reconstructed 3D points.

Y (output_control)  real-array (real)

Y coordinates of the reconstructed 3D points.

Z (output_control)  real-array (real)

Z coordinates of the reconstructed 3D points.

CovXYZ (output_control)  real-array (real)

Covariance matrices of the reconstructed 3D points.

Possible Predecessors

match_essential_matrix_ransac

Possible Successors

essential_to_fundamental_matrix

Alternatives

vector_to_rel_pose, vector_to_fundamental_matrix

See also

stationary_camera_self_calibration

References

Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press, Cambridge; 2003.
J. Chris McGlone (editor): “Manual of Photogrammetry”; American Society for Photogrammetry and Remote Sensing; 2004.

Module

3D Metrology