Operator Reference

create_class_mlp (Operator)

create_class_mlp — Create a multilayer perceptron for classification or regression.

Signature

Herror T_create_class_mlp(const Htuple NumInput, const Htuple NumHidden, const Htuple NumOutput, const Htuple OutputFunction, const Htuple Preprocessing, const Htuple NumComponents, const Htuple RandSeed, Htuple* MLPHandle)

void CreateClassMlp(const HTuple& NumInput, const HTuple& NumHidden, const HTuple& NumOutput, const HTuple& OutputFunction, const HTuple& Preprocessing, const HTuple& NumComponents, const HTuple& RandSeed, HTuple* MLPHandle)

void HClassMlp::HClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const HString& OutputFunction, const HString& Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HClassMlp::HClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const char* OutputFunction, const char* Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HClassMlp::HClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const wchar_t* OutputFunction, const wchar_t* Preprocessing, Hlong NumComponents, Hlong RandSeed)   (Windows only)

void HClassMlp::CreateClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const HString& OutputFunction, const HString& Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HClassMlp::CreateClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const char* OutputFunction, const char* Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HClassMlp::CreateClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const wchar_t* OutputFunction, const wchar_t* Preprocessing, Hlong NumComponents, Hlong RandSeed)   (Windows only)

def create_class_mlp(num_input: int, num_hidden: int, num_output: int, output_function: str, preprocessing: str, num_components: int, rand_seed: int) -> HHandle

Description

create_class_mlp creates a neural net in the form of a multilayer perceptron (MLP), which can be used for classification or regression (function approximation), depending on how OutputFunction is set. The MLP consists of three layers: an input layer with NumInput input variables (units, neurons), a hidden layer with NumHidden units, and an output layer with NumOutput output variables. The MLP performs the following steps to calculate the activations z_j of the hidden units from the input data x_i (the so-called feature vector):

    z_j = \tanh\Bigl(\sum_{i=1}^{NumInput} w_{ij}^{(1)} x_i + b_j^{(1)}\Bigr), \quad j = 1, \ldots, NumHidden

Here, the matrix w^{(1)} and the vector b^{(1)} are the weights of the input layer (first layer) of the MLP. In the output layer, the activations z_j are transformed in a first step by using linear combinations of the variables in an analogous manner:

    y_k^{(a)} = \sum_{j=1}^{NumHidden} w_{jk}^{(2)} z_j + b_k^{(2)}, \quad k = 1, \ldots, NumOutput

Here, the matrix w^{(2)} and the vector b^{(2)} are the weights of the second layer of the MLP.

The activation function used in the output layer can be determined by setting OutputFunction. For OutputFunction = 'linear', the data are simply copied:

    y_k = y_k^{(a)}

This type of activation function should be used for regression problems (function approximation). It is not suited for classification problems.

For OutputFunction = 'logistic', the activations are computed as follows:

    y_k = \frac{1}{1 + \exp(-y_k^{(a)})}

This type of activation function should be used for classification problems with multiple (NumOutput) independent logical attributes as output. This kind of classification problem is relatively rare in practice.

For OutputFunction = 'softmax', the activations are computed as follows:

    y_k = \frac{\exp(y_k^{(a)})}{\sum_{l=1}^{NumOutput} \exp(y_l^{(a)})}

This type of activation function should be used for common classification problems with multiple (NumOutput) mutually exclusive classes as output. In particular, OutputFunction = 'softmax' must be used for the classification of pixel data with classify_image_class_mlp.
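As an illustration, a minimal HDevelop sketch of pixel classification with a softmax MLP might look as follows. The image, the training regions, the network sizes, and the rejection threshold are placeholders, and a three-channel image is assumed (so NumInput and NumComponents are both 3):

* Sketch only: classify the pixels of a three-channel image.
create_class_mlp (3, 8, NumClasses, 'softmax', 'normalization', \
                  3, 42, MLPHandle)
* TrainingRegions must contain one region of sample pixels per class.
add_samples_image_class_mlp (Image, TrainingRegions, MLPHandle)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
* Classify all pixels; 0.5 is an illustrative rejection threshold.
classify_image_class_mlp (Image, ClassRegions, MLPHandle, 0.5)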

The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature vectors. For Preprocessing = 'none', the feature vectors are passed unaltered to the MLP. NumComponents is ignored in this case.

For all other values of Preprocessing, a transformation of the feature vectors is computed from the training data set during the training and is applied later in the classification or evaluation as well.

For Preprocessing = 'normalization', the feature vectors are normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization does not change the length (dimensionality) of the feature vector, so NumComponents is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are measured in different units (e.g., if some of the data are gray value features and some are region features, or if region features are mixed, e.g., 'circularity' (unit: scalar) and 'area' (unit: pixel squared)). In these cases, the training of the net will typically require fewer iterations than without normalization.
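Written out componentwise (where m_i and s_i denote the mean and standard deviation of the i-th component over the training vectors), the normalization reads:

    x_i' = \frac{x_i - m_i}{s_i}, \quad i = 1, \ldots, NumInput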

For Preprocessing = 'principal_components', a principal component analysis is performed. First, the feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that the transformed features with the largest variation are concentrated in the first components of the transformed feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector, which typically are mainly influenced by noise, without losing a large amount of information. The parameter NumComponents can be used to determine how many of the transformed feature vector components should be used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be used to determine how much information each transformed component contains and hence aids the selection of NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components of the data are measured in different units. In addition, this transformation is useful if it can be expected that the features are highly correlated.
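The following hedged HDevelop sketch shows one way to pick NumComponents from the output of get_prep_info_class_mlp. It assumes that all training samples have already been added to a preliminary MLP and that CumInformationCont holds cumulative information fractions between 0 and 1:

* Preliminary MLP, used only to analyze the preprocessing.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'principal_components', NumIn, 42, MLPHandleTmp)
* ... add all training samples with add_sample_class_mlp ...
get_prep_info_class_mlp (MLPHandleTmp, 'principal_components', \
                         InformationCont, CumInformationCont)
* Keep the smallest number of components that retains 90% of the
* information content (the threshold is chosen for illustration).
NumComp := 1
while (CumInformationCont[NumComp - 1] < 0.9)
    NumComp := NumComp + 1
endwhile
clear_class_mlp (MLPHandleTmp)
* Create the actual MLP with the reduced number of components.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'principal_components', NumComp, 42, MLPHandle)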

In contrast to the above three transformations, which can be used for all MLP types, the transformation specified by Preprocessing = 'canonical_variates' can only be used if the MLP is used as a classifier with OutputFunction = 'softmax'. The computation of the canonical variates is also called linear discriminant analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the training vectors on average over all classes is computed. At the same time, the transformation maximally separates the mean values of the individual classes. As for Preprocessing = 'principal_components', the transformed components are sorted by information content, and hence transformed components with little information content can be omitted. For canonical variates, up to min(NumOutput - 1, NumInput) components can be selected. Also in this case, the information content of the transformed components can be determined with get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce the amount of data without losing a large amount of information, while additionally optimizing the separability of the classes after the data reduction.

For the last two types of transformations ('principal_components' and 'canonical_variates'), the actual number of input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transformations, the number of input variables, and thus usually also the number of hidden units, can be reduced. With this, the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.

Usually, NumHidden should be selected on the order of magnitude of NumInput and NumOutput. In many cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e., the MLP learns the training data very well, but does not return very good results on unknown data.

create_class_mlp initializes the weights described above with random numbers. To ensure that the results of training the classifier with train_class_mlp are reproducible, the seed value of the random number generator is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve a smaller error by selecting a different value for RandSeed and retraining the MLP.
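A hedged sketch of this retraining strategy (the network sizes, the number of seeds tried, and the training parameters are placeholders):

* Train several MLPs that differ only in RandSeed and keep the one
* with the smallest training error.
BestError := 1e10
for Seed := 1 to 5 by 1
    create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                      'normalization', NumIn, Seed, MLPHandleTest)
    * ... add the training samples to MLPHandleTest ...
    train_class_mlp (MLPHandleTest, 200, 1, 0.01, Error, ErrorLog)
    if (Error < BestError)
        BestError := Error
        MLPHandleBest := MLPHandleTest
    endif
endfor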

After the MLP has been created, training samples are typically added to the MLP by repeatedly calling add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained using train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the MLP can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is used as a classifier (i.e., for OutputFunction = 'softmax'), to classify data using classify_class_mlp.
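Put together, the life cycle might look like the following sketch (the file name and the training parameters are chosen for illustration):

create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* ... add samples with add_sample_class_mlp or read_samples_class_mlp ...
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'my_classifier.gmc')
* Later, e.g., in the run-time part of the application:
read_class_mlp ('my_classifier.gmc', MLPHandleRun)
classify_class_mlp (MLPHandleRun, Features, 1, Class, Confidence)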

The training of the MLP will usually result in very sharp boundaries between the different classes, i.e., the confidence for one class will drop from close to 1 (within the region of the class) to close to 0 (within the region of a different class) within a very narrow “band” in the feature space. If the classes do not overlap, this transition happens at a suitable location between the classes; if the classes overlap, the transition happens at a suitable location within the overlapping area. While this sharp transition is desirable in many applications, in some applications a smoother transition between different classes (i.e., a transition within a wider “band” in the feature space) is desirable to reflect a level of uncertainty within the region in the feature space between the classes. Furthermore, as described above, it may be desirable to prevent overfitting of the MLP to the training data. For these purposes, the MLP can be regularized by using set_regularization_params_class_mlp.
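A minimal sketch of such a regularization; the generic parameter name 'weight_prior' and its value are assumptions and should be checked against the reference of set_regularization_params_class_mlp:

create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Regularize before training; the value 1.0 is a placeholder.
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', 1.0)
* ... add samples and train as usual ...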

An MLP, as defined above, has no inherent capability for novelty detection, i.e., it will classify a random feature vector into one of the classes with a confidence close to 1 (unless the random feature vector happens to lie in a region of the feature space in which the training samples of different classes overlap). In some applications, however, it is desirable to reject feature vectors that do not lie close to any class, where “closeness” is defined by the proximity of the feature vector to the collection of feature vectors in the training set. To provide an MLP with the ability for novelty detection, i.e., to reject feature vectors that do not belong to any class, an explicit rejection class can be created by setting NumOutput to the number of actual classes plus 1. Then, set_rejection_params_class_mlp can be used to configure train_class_mlp to automatically generate samples for this rejection class.
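A hedged sketch of this setup; the generic parameter name and value passed to set_rejection_params_class_mlp are assumptions and should be checked against that operator's reference:

* NumClasses actual classes plus one rejection class.
create_class_mlp (NumIn, NumHidden, NumClasses + 1, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Let train_class_mlp generate samples for the rejection class
* ('sampling_strategy' and its value are assumed parameter names).
set_rejection_params_class_mlp (MLPHandle, 'sampling_strategy', \
                                'hyperbox_ring_around_each_class')
* ... add samples for the actual classes and train ...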

The combination of regularization and an automatic generation of a rejection class is useful in many applications since it provides a smooth transition between the actual classes and from the actual classes to the rejection class. This reflects the requirement of these applications that only feature vectors within the area of the feature space that corresponds to the training samples of each class should have a confidence close to 1, whereas random feature vectors not belonging to any class should have a confidence close to 0, and that transitions between the classes should be smooth, reflecting a growing degree of uncertainty the farther a feature vector lies from the respective class. In particular, OCR applications sometimes have this requirement (see create_ocr_class_mlp).

A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Please note that this guideline assumes optimal tuning of the parameters.

Execution Information

  • Multithreading type: reentrant (runs in parallel with non-exclusive operators).
  • Multithreading scope: global (may be called from any thread).
  • Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

NumInput (input_control)  integer → (integer)

Number of input variables (features) of the MLP.

Default: 20

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100

Restriction: NumInput >= 1

NumHidden (input_control)  integer → (integer)

Number of hidden units of the MLP.

Default: 10

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150

Restriction: NumHidden >= 1

NumOutput (input_control)  integer → (integer)

Number of output variables (classes) of the MLP.

Default: 5

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150

Restriction: NumOutput >= 1

OutputFunction (input_control)  string → (string)

Type of the activation function in the output layer of the MLP.

Default: 'softmax'

List of values: 'linear', 'logistic', 'softmax'

Preprocessing (input_control)  string → (string)

Type of preprocessing used to transform the feature vectors.

Default: 'normalization'

List of values: 'canonical_variates', 'none', 'normalization', 'principal_components'

NumComponents (input_control)  integer → (integer)

Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').

Default: 10

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100

Restriction: NumComponents >= 1

RandSeed (input_control)  integer → (integer)

Seed value of the random number generator that is used to initialize the MLP with random values.

Default: 42

MLPHandle (output_control)  class_mlp → (handle)

MLP handle.

Example (HDevelop)

* Use the MLP for regression (function approximation)
create_class_mlp (1, NumHidden, 1, 'linear', 'none', 1, 42, MLPHandle)
* Generate the training data
* D = [...]
* T = [...]
* Add the training data
for J := 0 to NumData-1 by 1
    add_sample_class_mlp (MLPHandle, D[J], T[J])
endfor
* Train the MLP
train_class_mlp (MLPHandle, 200, 0.001, 0.001, Error, ErrorLog)
* Generate test data
* X = [...]
* Compute the output of the MLP on the test data
for J := 0 to N-1 by 1
    evaluate_class_mlp (MLPHandle, X[J], Y)
endfor

* Use the MLP for classification
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
    * Generate training features and classes
    * Data = [...]
    * Class = [...]
    add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* Train the MLP
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Use the MLP to classify unknown data
for J := 0 to N-1 by 1
    * Extract features
    * Features = [...]
    classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)
endfor

Result

If the parameters are valid, the operator create_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Successors

add_sample_class_mlp, set_regularization_params_class_mlp, set_rejection_params_class_mlp

Alternatives

read_dl_classifier, create_class_svm, create_class_gmm

See also

clear_class_mlp, train_class_mlp, classify_class_mlp, evaluate_class_mlp

References

Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.

Module

Foundation