Operator Reference

query_available_dl_devices (Operator)

query_available_dl_devices — Get list of deep-learning-capable hardware devices.

Signature

query_available_dl_devices( : : GenParamName, GenParamValue : DLDeviceHandles)

Herror T_query_available_dl_devices(const Htuple GenParamName, const Htuple GenParamValue, Htuple* DLDeviceHandles)

void QueryAvailableDlDevices(const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* DLDeviceHandles)

static HDlDeviceArray HDlDevice::QueryAvailableDlDevices(const HTuple& GenParamName, const HTuple& GenParamValue)

void HDlDevice::QueryAvailableDlDevices(const HString& GenParamName, const HString& GenParamValue)

void HDlDevice::QueryAvailableDlDevices(const char* GenParamName, const char* GenParamValue)

void HDlDevice::QueryAvailableDlDevices(const wchar_t* GenParamName, const wchar_t* GenParamValue)   (Windows only)

static void HOperatorSet.QueryAvailableDlDevices(HTuple genParamName, HTuple genParamValue, out HTuple DLDeviceHandles)

static HDlDevice[] HDlDevice.QueryAvailableDlDevices(HTuple genParamName, HTuple genParamValue)

void HDlDevice.QueryAvailableDlDevices(string genParamName, string genParamValue)

def query_available_dl_devices(gen_param_name: MaybeSequence[str], gen_param_value: MaybeSequence[Union[int, float, str]]) -> Sequence[HHandle]

def query_available_dl_devices_s(gen_param_name: MaybeSequence[str], gen_param_value: MaybeSequence[Union[int, float, str]]) -> HHandle

Description

query_available_dl_devices returns a list of handles. Each handle refers to a deep-learning-capable hardware device (hereafter referred to as device) that can be used for inference or training of a deep learning model. For each returned device, every parameter listed in GenParamName matches at least one of its corresponding values in GenParamValue. A parameter can be given more than one value by repeating its name in GenParamName and adding a different corresponding value in GenParamValue.
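
To illustrate the repeated-name rule, a minimal HDevelop sketch (it uses only the parameter 'type' and the values 'cpu' and 'gpu' documented on this page):

* Repeating 'type' in GenParamName combines its two values with a logical 'or':
* every device whose type is either 'cpu' or 'gpu' is returned.
query_available_dl_devices (['type', 'type'], ['cpu', 'gpu'], DLDeviceHandles)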

A deep-learning-capable device is either supported directly through HALCON or through an AI 2-interface.

The devices that are supported directly through HALCON are equivalent to those that can be set for a deep learning model via set_dl_model_param using 'runtime' = 'cpu' or 'runtime' = 'gpu'. For these devices, HALCON provides an internal implementation of the inference and training of a deep learning model. See Deep Learning for more details.
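
As a minimal sketch of this relation (assuming an existing model handle DLModelHandle, e.g. obtained with read_dl_model, and that the model accepts a device handle via the 'device' parameter of set_dl_model_param):

* Pick a CPU device that is directly supported by HALCON and attach it to the model.
query_available_dl_devices (['runtime'], ['cpu'], DLDeviceHandles)
if (|DLDeviceHandles| > 0)
    set_dl_model_param (DLModelHandle, 'device', DLDeviceHandles[0])
endif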

Devices that are supported through an AI 2-interface can also be set for a deep learning model using set_dl_model_param. In this case the inference is not executed by HALCON but by the device itself.

query_available_dl_devices returns a handle for each deep-learning-capable device, regardless of whether it is supported through HALCON or through an inference engine.

If a device is supported both through HALCON and through one or several inference engines, query_available_dl_devices returns a separate handle for HALCON and for each inference engine.
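
To tell the returned handles apart, their parameters can be read back with get_dl_device_param; a short sketch (the parameter names are taken from the list further below, the returned values depend on the installed hardware and interfaces):

* Inspect every returned device handle.
query_available_dl_devices ([], [], DLDeviceHandles)
for Index := 0 to |DLDeviceHandles| - 1 by 1
    get_dl_device_param (DLDeviceHandles[Index], 'name', DeviceName)
    get_dl_device_param (DLDeviceHandles[Index], 'ai_accelerator_interface', AIInterface)
    get_dl_device_param (DLDeviceHandles[Index], 'type', DeviceType)
endfor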

GenParamName can be used to filter the devices. All parameters that are gettable via get_dl_device_param and that do not return a handle-typed value in GenParamValue are supported for filtering. See the operator reference of get_dl_device_param for the list of gettable parameters. In addition, the following value is supported:

'runtime':

Filters for the devices that are directly supported by HALCON through the given runtime.

List of values: 'cpu', 'gpu'.

GenParamName can contain the same parameter name multiple times. In this case the filter combines these entries with a logical 'or'. See the example code below for how to use the filter.

Execution Information

  • Multithreading type: reentrant (runs in parallel with non-exclusive operators).
  • Multithreading scope: global (may be called from any thread).
  • Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

GenParamName (input_control)  attribute.name(-array) → (string)

Name of the generic parameter.

Default: []

List of values: 'ai_accelerator_interface', 'calibration_precisions', 'cast_precisions', 'conversion_supported', 'id', 'inference_only', 'name', 'optimize_for_inference_params', 'precisions', 'runtime', 'settable_device_params', 'type'

GenParamValue (input_control)  attribute.value(-array) → (string / integer / real)

Value of the generic parameter.

Default: []

DLDeviceHandles (output_control)  dl_device(-array) → (handle)

Tuple of DLDevice handles.

Example (HDevelop)

* Query all deep-learning-capable hardware devices
query_available_dl_devices ([], [], DLDeviceHandles)

* Query all GPUs with ID 0 or 2
query_available_dl_devices (['type', 'id', 'id'], ['gpu', 0, 2],\
                            DLDeviceHandles)

* Query the GPU with ID 1 that is directly supported by HALCON
query_available_dl_devices (['runtime', 'id'], ['gpu', 1], DLDeviceHandles)

Result

If the parameters are valid, the operator query_available_dl_devices returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Successors

get_dl_device_param

Module

Foundation