HALCON Operator reference
create_dl_layer_activation (Operator)
create_dl_layer_activation — Create an activation layer.
Signature
create_dl_layer_activation( : : DLLayerInput, LayerName, ActivationType, GenParamName, GenParamValue : DLLayerActivation)
Description
The operator create_dl_layer_activation creates an activation layer
whose handle is returned in DLLayerActivation.
The parameter DLLayerInput determines the feeding input layer and
expects the layer handle as value.
The parameter LayerName sets an individual layer name.
Note that when creating a model using create_dl_model, each layer of
the created network must have a unique name.
The parameter ActivationType sets the type of the activation.
Every activation mode defines a pointwise function.
Supported activation types are:
- 'abs':
Absolute value.
- 'acos':
Arccosine activation.
- 'asin':
Arcsine activation.
- 'atan':
Arctangent activation.
- 'ceil':
Rounds the input up to the nearest integer.
- 'celu':
Continuously differentiable exponential linear unit (CELU) activation, which is defined as follows: f(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)). Setting the generic parameter 'alpha' determines the value of alpha (default: 1.0). For alpha = 1.0, a CELU activation is identical to an ELU.
- 'clip':
Clip the input to a given interval: f(x) = min(max(x, 'min'), 'max'). Setting the generic parameters 'min' and 'max' determines the lower and upper bound of the interval, respectively.
- 'cos':
Cosine activation.
- 'cosh':
Hyperbolic cosine activation.
- 'elu':
Exponential linear unit (ELU) activation, which is defined as follows: f(x) = x for x > 0, and f(x) = alpha * (exp(x) - 1) for x <= 0. Setting the generic parameter 'alpha' determines the value of alpha (default: 1.0).
- 'erf':
Gauss error function, which is defined as follows: f(x) = erf(x) = (2 / sqrt(pi)) * integral from 0 to x of exp(-t^2) dt.
- 'exp':
Exponential function: f(x) = exp(x).
- 'floor':
Rounds the input down to the nearest integer.
- 'gelu':
Gaussian error linear unit (GELU) activation, which is defined as follows: f(x) = x * Phi(x) = (x / 2) * (1 + erf(x / sqrt(2))), where Phi is the cumulative distribution function of the standard normal distribution. Setting the generic parameter 'approximate' to 'tanh' determines whether an approximate function estimation is used: f(x) = (x / 2) * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3))).
- 'hard_sigmoid':
HardSigmoid activation: f(x) = max(0, min(1, alpha * x + beta)). Setting the generic parameters 'alpha' and 'beta' determines the values of alpha (default: 0.2) and beta (default: 0.5).
- 'hard_swish':
HardSwish activation: f(x) = x * max(0, min(1, alpha * x + beta)) with fixed alpha = 1/6 and beta = 0.5.
- 'log':
Natural logarithm: f(x) = ln(x).
- 'mish':
Mish activation: f(x) = x * tanh(ln(1 + exp(x))).
- 'neg':
Negative of input: f(x) = -x.
- 'pow':
Power function: f(x) = x^e. Setting the generic parameter 'exponent' determines the value of the exponent e.
- 'reciprocal':
Reciprocal function: f(x) = 1 / x.
- 'relu':
Rectified linear unit (ReLU) activation. By setting a specific ReLU parameter, another type can be specified instead of the standard ReLU:
- Standard ReLU, defined as follows: f(x) = max(0, x).
- Bounded ReLU, defined as follows: f(x) = min(max(0, x), u). Setting the generic parameter 'upper_bound' will result in a bounded ReLU and determines the value of the upper bound u.
- Leaky ReLU, defined as follows: f(x) = x for x >= 0, and f(x) = alpha * x for x < 0. Setting the generic parameter 'alpha' results in a leaky ReLU and determines the value of alpha. When the generic parameter 'upper_bound' is set, 'alpha' will be ignored and the result will be a bounded ReLU.
- 'round':
Rounds the input to the nearest integer. Values with a fractional part of '.5' are rounded to the nearest even integer, e.g., 0.5 is rounded to 0 and 1.5 is rounded to 2.
- 'sigmoid':
Sigmoid activation, which is defined as follows: f(x) = 1 / (1 + exp(-x)).
- 'sin':
Sine activation.
- 'sinh':
Hyperbolic sine activation.
- 'softplus':
Softplus activation function, which is defined as follows: f(x) = ln(1 + exp(x)).
- 'softsign':
Softsign activation function, which is defined as follows: f(x) = x / (1 + |x|).
- 'sqrt':
Square root of the input: f(x) = sqrt(x).
- 'swish':
Swish activation function, which is defined as follows: f(x) = x * sigmoid(alpha * x) = x / (1 + exp(-alpha * x)). Setting the generic parameter 'alpha' determines the value of alpha (default: 1.0).
- 'tan':
Tangent activation.
- 'tanh':
Tanh activation, which is defined as follows: f(x) = tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)).
- 'thresholded_relu':
Thresholded ReLU, defined as follows: f(x) = x for x > alpha, and f(x) = 0 otherwise. Setting the generic parameter 'alpha' determines the value of alpha (default: 1.0).
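The pointwise definitions above can be sketched in plain Python. This is a reference illustration of the math only, not HALCON code; the parameter defaults follow the list above:

```python
import math

def relu(x, alpha=0.0, upper_bound=None):
    # 'upper_bound' overrides 'alpha': a bounded ReLU takes precedence.
    if upper_bound is not None:
        return min(max(0.0, x), upper_bound)
    return x if x >= 0 else alpha * x  # alpha > 0 gives a leaky ReLU

def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def celu(x, alpha=1.0):
    # For alpha = 1.0 this coincides with ELU.
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))

def hard_sigmoid(x, alpha=0.2, beta=0.5):
    return max(0.0, min(1.0, alpha * x + beta))

def hard_swish(x):
    # Fixed alpha = 1/6 and beta = 0.5.
    return x * max(0.0, min(1.0, x / 6.0 + 0.5))

def softplus(x):
    return math.log(1.0 + math.exp(x))

def mish(x):
    return x * math.tanh(softplus(x))

def swish(x, alpha=1.0):
    return x / (1.0 + math.exp(-alpha * x))

def thresholded_relu(x, alpha=1.0):
    return x if x > alpha else 0.0
```

Note that Python's built-in round() also uses the half-to-even rule described for the 'round' activation.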
The following generic parameters GenParamName and the corresponding
values GenParamValue are supported:
- 'is_inference_output':
Determines whether apply_dl_model will include the output of this layer in the dictionary DLResultBatch even without specifying this layer in Outputs ('true') or not ('false').
Default: 'false'
- 'upper_bound' ('relu'):
Float value defining an upper bound for a rectified linear unit. If the activation layer is part of a model which has been created using create_dl_model, the upper bound can be unset. To do so, use set_dl_model_layer_param and set an empty tuple for 'upper_bound'.
Default: []
- 'alpha' ('relu', 'elu', 'celu', 'thresholded_relu', 'swish', 'hard_sigmoid'):
Float value defining the alpha parameter of a leaky ReLU, thresholded ReLU, ELU, CELU, Swish, or HardSigmoid activation.
Restriction: The value of 'alpha' must be positive or zero for all activations except for ActivationType 'thresholded_relu'. This parameter is incompatible with and overridden by 'upper_bound' for ActivationType 'relu'.
Default: 0.2 for 'hard_sigmoid', else 1.0
- 'beta' ('hard_sigmoid'):
Float value defining the beta parameter of a HardSigmoid activation.
Default: 0.5
- 'min' ('clip'):
Float value defining the min parameter of a clip activation.
- 'max' ('clip'):
Float value defining the max parameter of a clip activation.
- 'approximate' ('gelu'):
This value determines whether an approximate function estimation is used.
List of values: 'tanh', 'false'
Default: 'false'
- 'exponent' ('pow'):
Float value defining the exponent parameter of a pow activation.
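Generic parameters are passed as parallel tuples: the i-th entry of GenParamValue belongs to the i-th entry of GenParamName. A small Python sketch of this pairing convention (pair_generic_params is a hypothetical helper for illustration, not a HALCON operator):

```python
def pair_generic_params(names, values):
    # GenParamName and GenParamValue are parallel tuples:
    # the i-th value belongs to the i-th name.
    if len(names) != len(values):
        raise ValueError("GenParamName and GenParamValue must have equal length")
    return dict(zip(names, values))

# e.g. a bounded ReLU (ReLU6) whose output is always returned at inference:
params = pair_generic_params(['upper_bound', 'is_inference_output'],
                             [6.0, 'true'])
```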
Certain parameters of layers created using this operator
create_dl_layer_activation can be set and retrieved using
further operators.
The following tables give an overview of which parameters can be set
using set_dl_model_layer_param and which ones can be retrieved
using get_dl_model_layer_param or get_dl_layer_param. Note that the
operators set_dl_model_layer_param and get_dl_model_layer_param
require a model created by create_dl_model.
| Layer Parameters | set | get |
|---|---|---|
| 'activation_type' (ActivationType) | x | x |
| 'input_layer' (DLLayerInput) |  | x |
| 'name' (LayerName) | x | x |
| 'output_layer' (DLLayerActivation) |  | x |
| 'shape' |  | x |
| 'type' |  | x |

| Generic Layer Parameters | set | get |
|---|---|---|
| 'is_inference_output' | x | x |
| 'num_trainable_params' |  | x |
| 'alpha' | x | x |
| 'beta' | x | x |
| 'upper_bound' | x | x |
| 'min' | x | x |
| 'max' | x | x |
| 'approximate' | x | x |
| 'exponent' | x | x |
Execution Information
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Parameters
DLLayerInput (input_control) dl_layer → (handle)
Feeding layer.
LayerName (input_control) string → (string)
Name of the output layer.
ActivationType (input_control) string → (string)
Activation type.
Default: 'relu'
List of values: 'abs', 'acos', 'asin', 'atan', 'ceil', 'celu', 'clip', 'cos', 'cosh', 'elu', 'erf', 'exp', 'floor', 'gelu', 'hard_sigmoid', 'hard_swish', 'log', 'mish', 'neg', 'pow', 'reciprocal', 'relu', 'round', 'sigmoid', 'sin', 'sinh', 'softplus', 'softsign', 'sqrt', 'swish', 'tan', 'tanh', 'thresholded_relu'
GenParamName (input_control) attribute.name(-array) → (string)
Generic input parameter names.
Default: []
List of values: 'alpha', 'approximate', 'beta', 'exponent', 'is_inference_output', 'max', 'min', 'upper_bound'
GenParamValue (input_control) attribute.value(-array) → (string / integer / real)
Generic input parameter values.
Default: []
Suggested values: 'true', 'false', 'tanh'
DLLayerActivation (output_control) dl_layer → (handle)
Activation layer.
Module
Deep Learning Professional