Operator Reference

train_dl_model_batch (Operator)

train_dl_model_batch — Train a deep learning model.

Signature

train_dl_model_batch( : : DLModelHandle, DLSampleBatch : DLTrainResult)

Herror T_train_dl_model_batch(const Htuple DLModelHandle, const Htuple DLSampleBatch, Htuple* DLTrainResult)

void TrainDlModelBatch(const HTuple& DLModelHandle, const HTuple& DLSampleBatch, HTuple* DLTrainResult)

HDict HDlModel::TrainDlModelBatch(const HDictArray& DLSampleBatch) const

static void HOperatorSet.TrainDlModelBatch(HTuple DLModelHandle, HTuple DLSampleBatch, out HTuple DLTrainResult)

HDict HDlModel.TrainDlModelBatch(HDict[] DLSampleBatch)

def train_dl_model_batch(dlmodel_handle: HHandle, dlsample_batch: Sequence[HHandle]) -> HHandle

Description

The operator train_dl_model_batch performs a training step of the deep learning model contained in DLModelHandle. The current loss values are returned in the dictionary DLTrainResult.

For DLModelHandle, all model types except 'anomaly_detection' and 'counting' are valid. See train_dl_model_anomaly_dataset for the training of anomaly detection models.
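
As a hedged illustration, the model type can be checked before training (the operators are documented above; the control flow and variable names are a sketch only):

* Sketch: verify that the model type is supported by train_dl_model_batch.
get_dl_model_param (DLModelHandle, 'type', ModelType)
if (ModelType == 'anomaly_detection' or ModelType == 'counting')
    * These model types are trained differently, see train_dl_model_anomaly_dataset.
    stop ()
endif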

A training step here means performing a single update of the weights, based on the batch of images given in DLSampleBatch. The optimization algorithms that can be used are explained further in the subsection “Further Information on the Algorithms” below. For more information on how to train a network, please see the subchapter “The Network and the Training Process” in Deep Learning.

To successfully train the model, its applicable hyperparameters need to be set and the training data handed over according to the model requirements. For information on the hyperparameters, see the chapter of the corresponding model type and the general chapter Deep Learning.
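
For example, typical hyperparameters could be set as in the following sketch. Which parameters apply and which values are reasonable depend on the model type; the values below are placeholders:

* Sketch: set training hyperparameters before the first training step.
set_dl_model_param (DLModelHandle, 'batch_size', 4)
set_dl_model_param (DLModelHandle, 'learning_rate', 0.001)
set_dl_model_param (DLModelHandle, 'momentum', 0.9)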

The training data consists of images and corresponding information. This operator expects one batch of training data, handed over in the tuple of dictionaries DLSampleBatchDLSampleBatchDLSampleBatchDLSampleBatchdlsample_batch. Such a DLSampleDLSampleDLSampleDLSampledlsample dictionary is created out of DLDataset for every image sample, e.g., by the procedure gen_dl_samples. See the chapter Deep Learning / Model for further information to the used dictionaries and their keys.
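
As an illustration, the following sketch performs a single training step. It assumes that DLDataset and DLModelHandle already exist, that the samples have been preprocessed according to the model requirements, and that the standard procedure gen_dl_samples mentioned above is available; variable names, the index selection, and the exact procedure signature are placeholders that may need to be adapted:

* Sketch of a single training step (variable names are placeholders).
get_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
* Create the DLSample dictionaries for the first BatchSize dataset entries,
* e.g., with the standard procedure gen_dl_samples referenced above.
gen_dl_samples (DLDataset, [0:BatchSize-1], DLSampleBatch)
* Perform one weight update on this batch.
train_dl_model_batch (DLModelHandle, DLSampleBatch, DLTrainResult)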

The number of images in a DLSampleBatch tuple needs to be a multiple of the 'batch_size'. In particular on a GPU, the parameter 'batch_size' is limited by the amount of available memory. In order to process more images in one training step, the model parameter 'batch_size_multiplier' can be set to a value greater than 1. The number of DLSample dictionaries passed to the training operator needs to be equal to 'batch_size' times 'batch_size_multiplier'. Note that a training step calculated for a batch with a 'batch_size_multiplier' greater than 1 is an approximation of a training step calculated for the same batch but with a 'batch_size_multiplier' equal to 1 and an accordingly greater 'batch_size'. As an example, the loss calculated with a 'batch_size' of 4 and a 'batch_size_multiplier' of 2 is usually not equal to the loss calculated with a 'batch_size' of 8 and a 'batch_size_multiplier' of 1, although the same number of DLSample dictionaries is used for training in both cases. However, the approximation generally delivers comparably good results, so it can be utilized if you wish to train with a larger number of images than your GPU allows. In some rare cases the approximation with a 'batch_size' of 1 and an accordingly large 'batch_size_multiplier' does not show the expected performance. Setting 'batch_size' to a value greater than 1 can help to solve this issue.
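
A minimal sketch of the relation described above (the parameter values are placeholders):

* Sketch: process 8 images per training step although only 4 fit into GPU memory.
set_dl_model_param (DLModelHandle, 'batch_size', 4)
set_dl_model_param (DLModelHandle, 'batch_size_multiplier', 2)
* The tuple passed to train_dl_model_batch must then contain
* 'batch_size' * 'batch_size_multiplier' = 8 DLSample dictionaries.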

In the output dictionary DLTrainResult you get the current value of the total loss as the value for the key 'total_loss', as well as the values for all other losses included in your model.

For models of 'type' = 'detection', such losses are, e.g., the losses for the heads of every selected level, namely the 'Huber Loss' for the bounding box regression heads and the 'Focal Loss' for the classification heads (see also Deep Learning / Object Detection and Instance Segmentation as well as 'max_level' and 'min_level' in get_dl_model_param).
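
A minimal sketch for inspecting the returned losses; note that the key names besides 'total_loss' depend on the model type:

* Read the current total loss from the training result dictionary.
get_dict_tuple (DLTrainResult, 'total_loss', TotalLoss)
* List all keys of DLTrainResult to see which further losses are reported.
get_dict_param (DLTrainResult, 'keys', [], LossKeys)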

Further Information on the Algorithms

During training, an optimization algorithm is applied with the goal to minimize the value of the total loss function. The latter is determined based on the prediction of the neural network for the current batch of images.

In HALCON, two optimization algorithms are available so far: SGD (stochastic gradient descent) and Adam (adaptive moment estimation).
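
Which of the two solvers is used is a model parameter. Assuming the parameter name 'solver_type' with the values 'sgd' and 'adam' (an assumption; please verify against the set_dl_model_param documentation of your HALCON version), the selection could look as follows:

* Assumption: the solver is selected via the model parameter 'solver_type'.
set_dl_model_param (DLModelHandle, 'solver_type', 'adam')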

SGD:

The SGD updates the layers' weights $w$ of the previous iteration $t$, $w^{(t)}$, to the new values at iteration $t+1$ as follows:

$$v^{(t+1)} = \mu \, v^{(t)} - \lambda \, \nabla \ell\big(w^{(t)}\big), \qquad w^{(t+1)} = w^{(t)} + v^{(t+1)}$$

Here, $\lambda$ is the learning rate, $\mu$ the momentum, $\ell$ the total loss, and $\nabla \ell\big(w^{(t)}\big)$ the gradient of the total loss with respect to the weights. The variable $v$ is used to include the influence of the momentum $\mu$.
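
As a worked example with values chosen only for illustration: for a single weight $w^{(t)} = 1.0$ with $\lambda = 0.1$, $\mu = 0.9$, a previous update $v^{(t)} = 0.02$, and a gradient $\nabla \ell\big(w^{(t)}\big) = 0.5$, one SGD step yields

$$v^{(t+1)} = 0.9 \cdot 0.02 - 0.1 \cdot 0.5 = -0.032, \qquad w^{(t+1)} = 1.0 - 0.032 = 0.968.$$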

Adam:

Like the SGD, Adam updates the layers' weights of the previous iteration, but comes with an adaptive moment estimation to automatically estimate a scaling for the learning rate. In this way it is determined how fast the solver moves towards a minimum. The estimated moments are the first two moments of the weights' gradients, which are the mean and the uncentered variance. To estimate the moments, Adam uses exponentially moving averages $m$ and $v$, computed on the gradient $g$ evaluated on a mini-batch. This results in the following formulas:

$$m^{(t+1)} = \beta_1 \, m^{(t)} + (1 - \beta_1) \, g^{(t)}, \qquad v^{(t+1)} = \beta_2 \, v^{(t)} + (1 - \beta_2) \, \big(g^{(t)}\big)^2$$

$g^{(t)}$ is the weights' gradient on the current mini-batch, $\beta_1$ is the moment for the linear term, and $\beta_2$ is the moment for the quadratic term of the Adam solver. Furthermore, Adam has so-called bias correctors $\hat{m}$ and $\hat{v}$. These values are computed as follows:

$$\hat{m} = \frac{m^{(t+1)}}{1 - \beta_1^{\,t+1}}, \qquad \hat{v} = \frac{v^{(t+1)}}{1 - \beta_2^{\,t+1}}$$

As a last step, the moving averages are used to scale the learning rate individually for each parameter. With $w$ as the model weights and $\lambda$ as the learning rate, the weight update reads:

$$w^{(t+1)} = w^{(t)} - \lambda \, \frac{\hat{m}}{\sqrt{\hat{v}} + \epsilon}$$

Here, $\epsilon$ is the parameter to ensure numeric stability. For a more detailed description we refer to the referenced paper.
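
Again as an illustration only: with $\lambda = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, zero-initialized moments, and a gradient $g^{(0)} = 0.5$ for a single weight, the first step gives

$$m^{(1)} = 0.1 \cdot 0.5 = 0.05, \quad v^{(1)} = 0.001 \cdot 0.25 = 0.00025, \quad \hat{m} = \frac{0.05}{0.1} = 0.5, \quad \hat{v} = \frac{0.00025}{0.001} = 0.25,$$

$$w^{(1)} = w^{(0)} - 0.001 \cdot \frac{0.5}{\sqrt{0.25} + 10^{-8}} \approx w^{(0)} - 0.001,$$

so the very first Adam step moves the weight by approximately $-\lambda$, largely independent of the gradient's magnitude.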

The different models may have several losses implemented, which are summed up. To this sum the regularization term is added, which generally penalizes large weights, and together they form the total loss. The different types of losses are:

Huber Loss (model of 'type' = 'detection'):

The 'Huber Loss' is also known as 'Smooth L1 Loss'. The total 'Huber Loss' is the sum of the contributions from all bounding box variables of all found instances in the batch. For a single bounding box variable $x$, this contribution is defined as follows:

$$L_{\mathrm{huber}}(x) = \begin{cases} \dfrac{0.5\, x^2}{\beta} & \text{if } |x| < \beta \\[4pt] |x| - 0.5\, \beta & \text{otherwise} \end{cases}$$

Thereby, $x$ denotes a bounding box variable and $\beta$ a parameter fixed to a value of 0.11.

We refer to create_dl_layer_loss_huber for more information.

Focal Loss (model of 'type' = 'detection'):

The total 'Focal Loss' is the sum of the contributions from all found instances in the batch. For a single sample, this contribution is defined as follows:

$$L_{\mathrm{focal}} = -\,\alpha_t \, (1 - p_t)^{\gamma} \, \log(p_t)$$

where $\gamma$ is a parameter fixed to a value of 2. $\alpha_k$ stands for the class-specific weight ('class_weights') of the $k$-th class, and $p_t$, $\alpha_t$ are defined as

$$p_t = p^{\top} y, \qquad \alpha_t = \alpha^{\top} y.$$

Here, $p$ is a tuple of the model's estimated probabilities for each of the $K$-many classes, $\alpha = (\alpha_1, \ldots, \alpha_K)$ collects the class-specific weights, and $y$ is a one-hot encoded target vector that encodes the class of the annotation.

We refer to create_dl_layer_loss_focal for more information.

Multinomial Logistic Loss (model of 'type' = 'classification', 'segmentation'):

The 'Multinomial Logistic Loss' is also known as 'Cross Entropy Loss'. It is defined as follows:

$$L_{\mathrm{CE}}(x, y) = -\,\frac{1}{N} \sum_{n=1}^{N} w_n \, \big\langle y_n, \log p_n(x) \big\rangle$$

Here, $p_n(x)$ is the predicted result for the $n$-th image, which depends on the network weights and the input batch $x$. $y_n$ is a one-hot encoded target vector that encodes the label of the $n$-th image of the batch containing $N$-many images, and $\log$ shall be understood to be applied componentwise to the vector $p_n(x)$. The value $w_n$ is a class-specific weight for the class given by $y_n$. This weight corresponds to the value set by 'class_weights' and is additionally normalized by the sum over the weights for all classes.

We refer to create_dl_layer_loss_cross_entropy for more information.

The regularization term $r$ is a weighted $\ell_2$-norm involving all weights except for biases:

$$r(w) = \frac{\alpha}{2} \sum_{i} w_i^2$$

Its influence can be controlled through $\alpha$, which corresponds to the hyperparameter 'weight_prior' and can be set with set_dl_model_param. Here the index $i$ runs over all weights of the network, except for the biases, which are not regularized. The regularization term generally penalizes large weights, thus pushing the weights towards zero, which effectively reduces the complexity of the model.
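
Putting the pieces above together, the total loss minimized by the solver can be written schematically as (notation introduced only for this sketch)

$$L_{\mathrm{total}} = \sum_{j} L_j + r,$$

where the $L_j$ are the model-specific losses listed above and $r$ is the regularization term.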

Attention

The operator train_dl_model_batch internally calls functions that might not be deterministic. Therefore, results from multiple calls of train_dl_model_batch can slightly differ, although the same input values have been used. Setting 'cudnn_deterministic' of set_system may influence this behavior.
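
A sketch of how this system parameter could be set before training, assuming 'true' is an accepted value (see the set_system documentation for the documented values):

* Assumption: request deterministic cuDNN algorithms where available.
set_system ('cudnn_deterministic', 'true')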

System requirements: The implementation on CPU is limited to specific platform types. To run this operator on the GPU (by setting 'runtime' to 'gpu', see get_dl_model_param), cuDNN and cuBLAS are required. Please refer to the “Installation Guide”, paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”, for the specific system requirements.
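
For example, the GPU runtime can be selected before training as follows (device selection details depend on your setup):

* Select the GPU runtime for the model (requires cuDNN and cuBLAS, see above).
set_dl_model_param (DLModelHandle, 'runtime', 'gpu')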

Execution Information

  • Multithreading type: reentrant (runs in parallel with non-exclusive operators).
  • Multithreading scope: global (may be called from any thread).
  • Automatically parallelized on internal data level.

Parameters

DLModelHandle (input_control)  dl_model (handle)

Deep learning model handle.

DLSampleBatch (input_control)  dict-array (handle)

Tuple of dictionaries with input images and corresponding information.

DLTrainResult (output_control)  dict (handle)

Dictionary with the train result data.

Result

If the parameters are valid, the operator train_dl_model_batch returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

read_dl_model, set_dl_model_param, get_dl_model_param

Possible Successors

apply_dl_model

See also

apply_dl_model

References

D. P. Kingma, J. Ba: "Adam: A Method for Stochastic Optimization", 2014, https://arxiv.org/pdf/1412.6980.pdf

Module

Foundation. This operator uses dynamic licensing (see the 'Installation Guide'). Which of the following modules is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Deep Learning Professional