Operator Reference
set_dl_classifier_param (Operator)
set_dl_classifier_param — Set the parameters of a deep-learning-based classifier.
Warning
set_dl_classifier_param is obsolete and is only provided for reasons of backward compatibility. The operator will be removed with HALCON 25.05. New applications should use the common CNN-based operator set_dl_model_param instead.
Signature
set_dl_classifier_param( : : DLClassifierHandle, GenParamName, GenParamValue : )
Description
set_dl_classifier_param sets the parameters and hyperparameters GenParamName of the neural network DLClassifierHandle to the values GenParamValue.
The pretrained classifiers are trained for their default image dimensions, see read_dl_classifier.
The network architectures allow different image dimensions. For networks with at least one fully connected layer, however, such a change makes retraining necessary. Networks without fully connected layers can be applied directly to different image sizes. Note that images whose size differs from the size with which the classifier has been trained are likely to show reduced classification accuracy (see the sketch below).
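As a minimal HDevelop sketch of changing the image dimensions (assuming 'pretrained_dl_classifier_compact.hdl' as example classifier; the chosen dimensions are purely illustrative, and whether retraining is required depends on the architecture as described above):

    * Read a pretrained classifier.
    read_dl_classifier ('pretrained_dl_classifier_compact.hdl', DLClassifierHandle)
    * Query the default image dimensions the classifier was trained for.
    get_dl_classifier_param (DLClassifierHandle, 'image_dimensions', DefaultDimensions)
    * Set illustrative new dimensions: width, height, number of channels.
    set_dl_classifier_param (DLClassifierHandle, 'image_dimensions', [300,300,3])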
GenParamName can attain the following values:
- 'batch_size' :
-
Number of images (and corresponding labels) in a batch that is transferred to device memory. The batch of images which are processed simultaneously in a single training iteration contains a number of images which is equal to 'batch_size' times 'batch_size_multiplier'. Please refer to train_dl_classifier_batch for further details. The parameter 'batch_size' is stored in the pretrained classifier. Per default, the 'batch_size' is set such that a training of the pretrained classifier with up to 100 classes can be easily performed on a device with 8 gigabytes of memory. For the pretrained classifiers, the default values are hence given as follows:
    pretrained classifier                        default value of 'batch_size'
    'pretrained_dl_classifier_compact.hdl'       160
    'pretrained_dl_classifier_enhanced.hdl'      96
    'pretrained_dl_classifier_resnet18.hdl'      24
    'pretrained_dl_classifier_resnet50.hdl'      23
For inference, the 'batch_size' can generally be set independently from the number of input images. See apply_dl_classifier for details on how to set this parameter for greater efficiency.
- 'batch_size_multiplier' :
-
Multiplier for 'batch_size' to enable training with larger numbers of images in one step, which would otherwise not be possible due to GPU memory limitations. For detailed information see train_dl_classifier_batch. This model parameter does not have any impact during evaluation and inference. For the pretrained classifiers, the default value of 'batch_size_multiplier' is set to 1.
- 'classes' :
-
Tuple of labels corresponding to the classes of objects which are to be recognized. The order of the class names remains unchanged after the setting.
- 'gpu' :
-
Identifier of the GPU where the training and inference operators (train_dl_classifier_batch and apply_dl_classifier) are executed. Per default, the first available GPU is used. get_system with 'cuda_devices' can be used to retrieve a list of available GPUs. Pass the index in this list to 'gpu'.
- 'image_width' :
-
Width of the images the network will process. The default value is given by the network, see read_dl_classifier.
- 'image_height' :
-
Height of the images the network will process. The default value is given by the network, see read_dl_classifier.
- 'image_num_channels' :
-
Number of channels of the images the network will process. Possible are one channel (gray value image) or three channels (three-channel image). The default value is given by the network, see read_dl_classifier. Changing to a single-channel image modifies the network configuration. This process removes the color information contained in certain layers and is not invertible.
- 'image_dimensions' :
-
Tuple containing the image dimensions 'image_width', 'image_height', and the number of channels 'image_num_channels'. The default values are given by the network, see read_dl_classifier. Concerning the number of channels, the values one (gray value image) or three (three-channel image) are possible. Changing to a single-channel image modifies the network configuration. This process removes the color information contained in certain layers and is not invertible.
- 'learning_rate' :
-
Initial value of the factor determining the gradient influence during training. Please refer to train_dl_classifier_batch for further details. The default value depends on the classifier.
- 'momentum' :
-
When updating the arguments of the loss function, the hyperparameter 'momentum' specifies to which extent previous updating vectors will be added to the current updating vector. Please refer to train_dl_classifier_batch for further details. Per default, the 'momentum' is set to 0.9.
- 'runtime' :
-
Defines the device on which the operators will be executed. Per default, the 'runtime' is set to 'gpu'.
- 'cpu' :
-
The operator apply_dl_classifier will be executed on the CPU, whereas the operator train_dl_classifier_batch is not executable. In case the GPU has been used before, CPU memory is initialized, and if necessary values stored in the GPU memory are moved to the CPU memory.
On Intel or AMD architectures the 'cpu' runtime uses OpenMP for the parallelization of apply_dl_classifier, where per default all threads available to the OpenMP runtime are used. You may use the set_system parameter 'thread_num' to specify the number of threads. On Arm architectures the 'cpu' runtime uses a global thread pool. You may specify the number of threads with the set_system parameter 'thread_num'. For both architectures mentioned above, it is not possible to specify a thread-specific number of threads (via the parameter 'tsp_thread_num' of the operator set_system).
- 'gpu' :
-
The GPU memory is initialized and the corresponding handle is created. The operators apply_dl_classifier and train_dl_classifier_batch will be executed on the GPU. For the specific requirements please refer to the HALCON “Installation Guide”.
- 'runtime_init' :
-
If called with 'immediately', the GPU memory is initialized and the corresponding handle is created. Otherwise this is done on demand, which may result in significantly longer execution times for the first call of apply_dl_classifier or train_dl_classifier_batch. If 'gpu' or 'batch_size' is changed with subsequent calls of set_dl_classifier_param, the GPU memory is reinitialized. Note that this parameter has no effect when running on CPUs, i.e., if 'runtime' is set to 'cpu'.
- 'weight_prior' :
-
Regularization parameter used for regularization of the loss function. Regularization is helpful in the presence of overfitting during the classifier training. If the hyperparameter 'weight_prior' (denoted λ below) is non-zero, the regularization term λ/2 · Σ_k w_k² is added to the loss function (see also train_dl_classifier_batch). Here the index k runs over all weights w_k of the network, except for the biases, which are not regularized. The regularization term generally penalizes large weights, thus pushing the weights towards zero, which effectively reduces the complexity of the model. Simply put: regularization favors simpler models that are less likely to learn noise in the data and generalize better. In case the classifier overfits the data, it is strongly recommended to try different values for the parameter 'weight_prior' to improve the generalization properties of the neural network. Choosing its value is a trade-off between the model's ability to generalize, overfitting, and underfitting. If λ is too small, the model might overfit; if it is too large, the model might lose its ability to fit the data, because all weights are effectively zero. For finding an ideal value for λ, we recommend a cross-validation, i.e., performing the training for a range of values and choosing the value that results in the best validation error. For typical applications, we recommend testing the values for 'weight_prior' on a logarithmic scale. If the training takes a very long time, one might consider performing the hyperparameter optimization on a reduced amount of data. The default value depends on the classifier. A short HDevelop sketch of setting the training hyperparameters follows this parameter list.
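The following HDevelop sketch shows how the training-related parameters above could be set. It assumes 'pretrained_dl_classifier_compact.hdl' as classifier; all numeric values are illustrative assumptions, not recommendations:

    * Read one of the pretrained classifiers listed above.
    read_dl_classifier ('pretrained_dl_classifier_compact.hdl', DLClassifierHandle)
    * Query the default batch size stored in the classifier.
    get_dl_classifier_param (DLClassifierHandle, 'batch_size', DefaultBatchSize)
    * Example values only; suitable settings depend on application and hardware.
    set_dl_classifier_param (DLClassifierHandle, 'batch_size', 32)
    set_dl_classifier_param (DLClassifierHandle, 'batch_size_multiplier', 2)
    set_dl_classifier_param (DLClassifierHandle, 'classes', ['class_1','class_2','class_3'])
    set_dl_classifier_param (DLClassifierHandle, 'learning_rate', 0.001)
    set_dl_classifier_param (DLClassifierHandle, 'momentum', 0.9)
    set_dl_classifier_param (DLClassifierHandle, 'weight_prior', 0.0005)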
For an explanation of the concept of deep-learning-based classification see the introduction of chapter Deep Learning / Classification. The workflow involving this legacy operator is described in the chapter Legacy / DL Classification.
Attention
Setting GPU-related parameters requires cuDNN and cuBLAS, i.e., they are needed to set the parameter GenParamName 'runtime' to 'gpu' or to set the GenParamName 'gpu'. For further details, please refer to the “Installation Guide”, paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”.
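A hedged HDevelop sketch of the device-related settings described above. It assumes DLClassifierHandle was obtained with read_dl_classifier and that a CUDA-capable GPU is available; the GPU index and thread count are illustrative:

    * List the CUDA devices visible to HALCON and select the first one.
    get_system ('cuda_devices', CUDADevices)
    set_dl_classifier_param (DLClassifierHandle, 'gpu', 0)
    set_dl_classifier_param (DLClassifierHandle, 'runtime', 'gpu')
    * Initialize the GPU memory immediately instead of on the first call.
    set_dl_classifier_param (DLClassifierHandle, 'runtime_init', 'immediately')
    * Alternatively, run inference on the CPU with a limited number of threads.
    set_dl_classifier_param (DLClassifierHandle, 'runtime', 'cpu')
    set_system ('thread_num', 4)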
Execution Information
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Parameters
DLClassifierHandle (input_control)  dl_classifier → (handle)
Handle of the deep-learning-based classifier.
GenParamName (input_control)  attribute.name(-array) → (string)
Name of the generic parameter.
Default: 'classes'
List of values: 'batch_size', 'batch_size_multiplier', 'classes', 'gpu', 'image_dimensions', 'image_height', 'image_num_channels', 'image_width', 'learning_rate', 'momentum', 'runtime', 'runtime_init', 'weight_prior'
GenParamValue (input_control)  attribute.value(-array) → (string / real / integer)
Value of the generic parameter.
Default: ['class_1','class_2','class_3']
Suggested values: 1, 2, 3, 50, 0.001, 'cpu', 'gpu', 'immediately'
Result
If the parameters are valid, the operator set_dl_classifier_param returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
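A minimal sketch of reacting to such an exception with HDevelop's exception handling (the class names are placeholders):

    try
        set_dl_classifier_param (DLClassifierHandle, 'classes', ['class_1','class_2','class_3'])
    catch (Exception)
        * React to invalid parameter names or values here.
    endtry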
Possible Predecessors
Possible Successors
get_dl_classifier_param, apply_dl_classifier, train_dl_classifier_batch
Alternatives
See also
Module
Deep Learning Enhanced