A Layer-Wise Data Augmentation Strategy for Deep Learning Networks and Its Soft Sensor Application in an Industrial Hydrocracking Process
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2021-08, Vol. 32 (8), pp. 3296-3305
Format: Article
Language: English
Abstract: In industrial processes, inferential sensors have been widely applied to predict quality variables that are difficult to measure online directly with hardware sensors. Deep learning is a recently developed technique for feature representation of complex data, which has great potential in soft sensor modeling. However, it often needs a large amount of representative data to train a good deep network. Moreover, layer-wise pretraining often causes information loss and generalization degradation in the higher hidden layers. This greatly limits the implementation and application of deep learning networks in industrial processes. In this article, a layer-wise data augmentation (LWDA) strategy is proposed for the pretraining of deep learning networks and soft sensor modeling. In particular, the LWDA-based stacked autoencoder (LWDA-SAE) is developed in detail. Finally, the proposed LWDA-SAE model is applied to predict the 10% and 50% boiling points of aviation kerosene in an industrial hydrocracking process. The results show that the LWDA-SAE-based soft sensor is superior to the multilayer perceptron, the traditional SAE, and an SAE with data augmentation only at its input layer (IDA-SAE). Moreover, LWDA-SAE converges faster and reaches a lower learning error than the other methods.
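To make the layer-wise idea concrete, the sketch below (not the authors' code) shows greedy layer-wise pretraining of a stacked autoencoder in which the training input of every layer, not just the first, is augmented before that layer's autoencoder is fit. The Gaussian-noise augmentation, layer sizes, and hyperparameters are illustrative assumptions; the paper's actual LWDA procedure may differ.

import torch
import torch.nn as nn

def augment(x, n_copies=3, noise_std=0.05):
    # Hypothetical augmentation: replicate each sample with Gaussian noise.
    copies = [x] + [x + noise_std * torch.randn_like(x) for _ in range(n_copies)]
    return torch.cat(copies, dim=0)

def pretrain_sae(x, hidden_dims=(64, 32, 16), epochs=50, lr=1e-3):
    """Greedy layer-wise pretraining with per-layer data augmentation."""
    encoders = []
    h = x
    for dim in hidden_dims:
        h_aug = augment(h)  # augment the input of THIS layer before training it
        enc = nn.Linear(h_aug.shape[1], dim)
        dec = nn.Linear(dim, h_aug.shape[1])
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            recon = dec(torch.sigmoid(enc(h_aug)))
            loss = nn.functional.mse_loss(recon, h_aug)
            loss.backward()
            opt.step()
        encoders.append(enc)
        with torch.no_grad():  # propagate only the ORIGINAL data to the next layer
            h = torch.sigmoid(enc(h))
    return encoders

if __name__ == "__main__":
    x = torch.randn(200, 30)  # toy process data: 200 samples, 30 process variables
    encoders = pretrain_sae(x)
    print([e.out_features for e in encoders])  # [64, 32, 16]

After pretraining, the stacked encoders would typically be topped with a regression output layer and fine-tuned end-to-end on the labeled quality-variable data, as in standard SAE-based soft sensors.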
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2019.2951708