Operator Reference

apply_deep_matching_3d (Operator)

apply_deep_matching_3d — Find the pose of objects using Deep 3D Matching.

Signature

apply_deep_matching_3d(Images : : Deep3DMatchingModel : DeepMatchingResults)

Herror T_apply_deep_matching_3d(const Hobject Images, const Htuple Deep3DMatchingModel, Htuple* DeepMatchingResults)

void ApplyDeepMatching3d(const HObject& Images, const HTuple& Deep3DMatchingModel, HTuple* DeepMatchingResults)

HDictArray HDeepMatching3D::ApplyDeepMatching3d(const HImage& Images) const

static void HOperatorSet.ApplyDeepMatching3d(HObject images, HTuple deep3DMatchingModel, out HTuple deepMatchingResults)

HDict[] HDeepMatching3D.ApplyDeepMatching3d(HImage images)

def apply_deep_matching_3d(images: HObject, deep_3dmatching_model: HHandle) -> Sequence[HHandle]

Description

The operator apply_deep_matching_3d finds instances of the object defined in Deep3DMatchingModel in the images Images and returns the detected instances and their 3D poses in DeepMatchingResults.

Input Images

Images must be an image array containing exactly as many images as there are cameras set in the Deep 3D Matching model (see set_deep_matching_3d_param). The image resolutions must match the resolutions of the corresponding camera parameters. The images must be of type 'byte' or 'float', and they must have 1 or 3 channels.
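These constraints can be validated before calling the operator. The following plain-Python sketch is purely illustrative and not part of the HALCON API: it models each image as a hypothetical (width, height, channels, dtype) tuple to make the rules above concrete.

```python
# Illustrative pre-check of the documented input constraints.
# NOT a HALCON function: images are modeled as (width, height, channels, dtype).

def check_images(images, num_cameras, camera_resolutions):
    """images: list of (width, height, channels, dtype) tuples.
    camera_resolutions: list of (width, height), one entry per camera."""
    if len(images) != num_cameras:
        raise ValueError(
            f"expected {num_cameras} images (one per camera), got {len(images)}")
    for idx, (width, height, channels, dtype) in enumerate(images):
        if (width, height) != tuple(camera_resolutions[idx]):
            raise ValueError(f"image {idx}: resolution does not match camera {idx}")
        if dtype not in ("byte", "float"):
            raise ValueError(f"image {idx}: type must be 'byte' or 'float'")
        if channels not in (1, 3):
            raise ValueError(f"image {idx}: must have 1 or 3 channels")

# Example: a two-camera setup with matching single-channel 'byte' images.
check_images(
    [(1280, 1024, 1, "byte"), (1280, 1024, 1, "byte")],
    num_cameras=2,
    camera_resolutions=[(1280, 1024), (1280, 1024)],
)
```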

Deep Learning Models

apply_deep_matching_3d uses deep learning technology to detect the object instances. For efficient execution, it is strongly recommended to use appropriate hardware accelerators and to optimize the deep learning models. See get_deep_matching_3d_param for how to obtain the deep learning models in order to set the device on which they are executed, and optimize_dl_model_for_inference for optimizing the models for a particular hardware.

Detection Steps

1. Object Detection

The object detection deep learning model is used to find instances of the target object in all images.

2. 3D Pose Estimation

The pose estimation deep learning model is used to estimate the 3D pose of all instances found in the previous step. Poses of the same object found in different images are combined into a single instance.

3. Pose Refinement

The poses found in the previous step are further refined using edges visible in the image. Additionally, their score is computed.

4. Filter Results

The detected instances are filtered using the minimum score ('min_score'), the minimum number of cameras in which instances must be visible ('min_num_views'), as well as the maximum number of instances to return ('num_matches').
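Taken together, the filter step behaves like the following plain-Python sketch. The parameter names 'min_score', 'min_num_views', and 'num_matches' come from the model parameters described above; the result dictionaries are hypothetical samples, and the sorting-by-score detail is an assumption for illustration, not a statement about the HALCON implementation.

```python
# Illustrative model of the result filtering; NOT the HALCON implementation.

def filter_results(results, min_score, min_num_views, num_matches):
    kept = [r for r in results
            if r["score"] >= min_score and len(r["cameras"]) >= min_num_views]
    # Return at most num_matches instances, best scores first (assumed order).
    kept.sort(key=lambda r: r["score"], reverse=True)
    return kept[:num_matches]

sample = [
    {"score": 0.91, "cameras": [0, 1, 2]},
    {"score": 0.55, "cameras": [1]},       # rejected: score below min_score
    {"score": 0.78, "cameras": [0, 2]},    # passes, but cut by num_matches=1
]
print(filter_results(sample, min_score=0.6, min_num_views=2, num_matches=1))
# → [{'score': 0.91, 'cameras': [0, 1, 2]}]
```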

Result Format

The results are returned in DeepMatchingResults as a dictionary. The dictionary key 'results' contains all detected instances. Each result has the following keys:

'score':

The score of the result instance.

'pose':

The pose of the result instance in the world coordinate system.

'cameras':

A tuple of integers containing the indices of the cameras in which the instance was detected.
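Reading the result dictionary might look as follows in the Python interface. The access pattern is a sketch assuming a dictionary of the shape documented above; the sample data is built by hand here (in practice it is returned by apply_deep_matching_3d), and the 7-element pose values are hypothetical.

```python
# Hypothetical sample mirroring the documented result structure; in practice
# this dictionary is returned by apply_deep_matching_3d.
deep_matching_results = {
    "results": [
        {"score": 0.87,
         "pose": [0.10, -0.05, 0.60, 350.0, 10.0, 90.0, 0],  # world coordinates
         "cameras": [0, 1]},
        {"score": 0.73,
         "pose": [0.30, 0.02, 0.58, 5.0, 352.0, 181.0, 0],
         "cameras": [1]},
    ],
}

for i, result in enumerate(deep_matching_results["results"]):
    print(f"instance {i}: score={result['score']:.2f}, "
          f"seen by cameras {result['cameras']}")
```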

Execution Information

  • Multithreading type: reentrant (runs in parallel with non-exclusive operators).
  • Multithreading scope: global (may be called from any thread).
  • Automatically parallelized on internal data level.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

Images (input_object)  (multichannel-)image(-array) object (byte / real)

Input images.

Deep3DMatchingModel (input_control)  deep_matching_3d (handle)

Deep 3D matching model.

DeepMatchingResults (output_control)  dict-array (handle)

Results.

Module

3D Metrology