Implementation method of multi-modal learning convolutional neural network model based on multi-feature information acquisition and fusion
Format: Patent
Language: Chinese; English
Abstract: The invention combines deep learning with polarized-light, bright-field, and hyperspectral microscopic imaging, and provides an implementation method for a multi-modal learning convolutional neural network (MML-CNN) model based on the acquisition and fusion of multiple kinds of feature information. The method comprises the following steps:

1. Extract polarized-image features at different angles using a local binary pattern (LBP) algorithm and superpose them to realize pixel-level fusion.
2. Extract features of the polarized-light image and the bright-field image separately using convolutional layers, and superpose these features to realize feature-level fusion.
3. Discriminate the hyperspectral data using a three-dimensional convolutional neural network (3D-CNN), combine the discriminated hyperspectral data with the image result, and then perform statistical analysis using a …
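The pixel-level fusion of step 1 can be sketched briefly. The example below is a minimal illustration, not the patented implementation: it assumes grayscale polarized images captured at several polarizer angles, and it treats the scikit-image LBP parameters and the use of averaging as the "superposition" rule as assumptions, since the abstract does not specify them.

```python
import numpy as np
from skimage.feature import local_binary_pattern


def fuse_polarized_lbp(polarized_images, n_points=8, radius=1):
    """Pixel-level fusion of polarized images taken at different angles.

    polarized_images: list of 2-D grayscale arrays, one per polarizer angle
    (e.g. 0, 45, 90, 135 degrees -- an assumed angle set).
    Returns one 2-D array: the superposition (here, the mean) of the
    per-angle LBP feature maps.
    """
    lbp_maps = []
    for img in polarized_images:
        # Extract texture features at this polarization angle with LBP.
        lbp = local_binary_pattern(img, P=n_points, R=radius, method="uniform")
        lbp_maps.append(lbp.astype(np.float32))
    # Superpose the per-angle feature maps to realize pixel-level fusion.
    return np.mean(np.stack(lbp_maps, axis=0), axis=0)
```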
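Steps 2 and 3 describe feature-level fusion of the polarized and bright-field image features with convolutional layers, plus a 3D-CNN branch that discriminates the hyperspectral cube before the results are combined. The PyTorch sketch below is only an illustration under assumed tensor shapes and layer sizes; the abstract does not disclose the actual architecture, channel counts, class count, or the statistical-analysis stage, so every module name and hyperparameter here is hypothetical, and concatenation stands in for the unspecified "superposition" and "combination" operations.

```python
import torch
import torch.nn as nn


class MMLCNNSketch(nn.Module):
    """Illustrative multi-branch network; not the patented architecture.

    - conv_pol / conv_bf: 2-D convolutional branches for the fused
      polarized image and the bright-field image; their features are
      concatenated (feature-level fusion).
    - conv_hsi: a small 3D-CNN branch for the hyperspectral cube,
      assumed shape (batch, 1, bands, H, W).
    """

    def __init__(self, n_classes=2):
        super().__init__()
        self.conv_pol = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(8))
        self.conv_bf = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(8))
        self.conv_hsi = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool3d((4, 8, 8)))
        # 16*8*8 features per 2-D branch plus 8*4*8*8 from the 3-D branch.
        self.classifier = nn.Linear(2 * 16 * 8 * 8 + 8 * 4 * 8 * 8, n_classes)

    def forward(self, pol_img, bf_img, hsi_cube):
        f_pol = self.conv_pol(pol_img).flatten(1)
        f_bf = self.conv_bf(bf_img).flatten(1)
        # Feature-level fusion of the two image branches (concatenation).
        f_img = torch.cat([f_pol, f_bf], dim=1)
        # 3D-CNN branch discriminates the hyperspectral data.
        f_hsi = self.conv_hsi(hsi_cube).flatten(1)
        # Combine the hyperspectral features with the image result before
        # the final decision stage (simplified here to a linear classifier).
        return self.classifier(torch.cat([f_img, f_hsi], dim=1))
```

For instance, `MMLCNNSketch()(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64), torch.randn(2, 1, 32, 64, 64))` returns a `(2, n_classes)` logit tensor; the adaptive pooling layers are used only so the sketch accepts arbitrary input sizes.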