Calibrating Trip Distribution Neural Network Models with Different Scenarios of Transfer Functions Used in Hidden and Output Layers

Bibliographic Details
Published in: International Journal on Advanced Science, Engineering and Information Technology, 2020-12, Vol. 10 (6), p. 2410-2418
Main Authors: Yaldi, Gusri; M. Nur, Imelda; Apwiddhal
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The transfer function processes the summation outputs at the hidden and output nodes. It can generally be categorized as either a non-linear or a linear function; the Sigmoid and Purelin functions are representative non-linear and linear transfer functions, respectively. It is often noted that there is no standard guideline for transfer function selection, and that the Sigmoid (Logsig) is widely used. However, the transfer function and the training algorithm have a procedural relationship when training a Multilayer Feedforward Neural Network (MLFFNN), a well-known Artificial Neural Network model structure. In the feedforward stage, this function transforms the linear summation output into either a linear (Purelin) or a non-linear (Sigmoid) form. In the backpropagation stage, its derivative is used to calculate the magnitude of change in the connection weights. Nine MLFFNN scenarios were developed based on the different transfer functions used in the hidden and output layers. To make fair comparisons, each scenario has the same initial connection weights. The modelling is conducted at the calibration level only; however, it involves different levels of complexity. The models were calibrated with the Levenberg-Marquardt training algorithm. The results suggest that some calibrations failed and negative estimations occurred once non-linear transfer functions were used in the hidden and output layers. Purelin was found to be superior to the other transfer functions; however, its weakness is that it produces negative estimations.
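To illustrate the procedural relationship described in the abstract, the sketch below shows how a transfer function is applied in the feedforward stage and how its derivative enters the weight update in the backpropagation stage. It is a minimal Python illustration, not the authors' implementation: the names (sigmoid, purelin, train_step) are hypothetical, the network is a single-hidden-layer MLFFNN, and plain gradient descent stands in for the Levenberg-Marquardt algorithm used in the study.

    import numpy as np

    # Transfer functions and their derivatives. The feedforward stage
    # applies the function itself; the backpropagation stage uses the
    # derivative when computing the change in the connection weights.
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_deriv(y):   # derivative written in terms of the output y
        return y * (1.0 - y)

    def purelin(x):         # linear transfer: output equals the summation
        return x

    def purelin_deriv(y):
        return np.ones_like(y)

    # One training step for a single-hidden-layer MLFFNN. The pair of
    # transfer functions chosen for the hidden and output layers defines
    # one scenario of the kind compared in the paper (e.g. Sigmoid/Purelin).
    def train_step(x, target, W1, W2, tf_h, d_h, tf_o, d_o, lr=0.01):
        h = tf_h(W1 @ x)                     # feedforward: hidden layer
        y = tf_o(W2 @ h)                     # feedforward: output layer
        delta_o = (target - y) * d_o(y)      # backprop: output-layer derivative
        delta_h = (W2.T @ delta_o) * d_h(h)  # backprop: hidden-layer derivative
        W2 += lr * np.outer(delta_o, h)      # weight updates
        W1 += lr * np.outer(delta_h, x)
        return y

Because Purelin passes the summation through unchanged, nothing constrains its output to be non-negative, which is consistent with the negative estimations the abstract reports as its weakness.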
ISSN: 2088-5334
DOI: 10.18517/ijaseit.10.6.7189