A Novel Multi-Modal Learning Approach for Cross-Process Defect Classification in TFT-LCD Array Manufacturing

Bibliographic Details
Published in: IEEE Transactions on Semiconductor Manufacturing, 2024-11, Vol. 37 (4), pp. 527-534
Authors: Liu, Yi; Lee, Wei-Te; Lu, Hsueh-Ping; Chen, Hung-Wen
Format: Article
Language: English
Description
Abstract: In the field of thin-film transistor liquid crystal display (TFT-LCD) manufacturing, automated defect classification across multi-layered array processes is profoundly challenging due to the intricate patterns involved. Traditional deep learning approaches, while promising, often fail to achieve high accuracy in cross-process recognition tasks. To address this gap, we propose a multi-modal learning approach that combines a knowledge engineering technique called Descriptive Embedding Generation (DEG) with a cross-modal contrastive learning strategy. Unlike conventional methods that rely primarily on visual data, our approach incorporates fine-grained descriptive information generated by DEG, enhancing the discriminative power of the learned model. The performance of this training strategy is demonstrated through rigorous experiments, which show accuracy improvements ranging from 0.92% to 7.89% over existing methods. Our approach has been validated by a leading TFT-LCD manufacturer in Taiwan, confirming its practical relevance and setting a new benchmark in cross-process and multi-product defect classification. This study not only advances the state of defect classification in smart manufacturing but also paves the way for future research in complex recognition tasks.
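The abstract gives only a high-level picture of the method, so below is a minimal sketch of the cross-modal contrastive component it mentions, assuming a CLIP-style symmetric InfoNCE objective between defect-image embeddings and embeddings of DEG-generated descriptions. The function name, temperature value, and encoder placeholders are illustrative assumptions, not the paper's actual implementation (Python/PyTorch).

    import torch
    import torch.nn.functional as F

    def cross_modal_contrastive_loss(image_emb: torch.Tensor,
                                     text_emb: torch.Tensor,
                                     temperature: float = 0.07) -> torch.Tensor:
        """Symmetric InfoNCE loss between a batch of defect-image embeddings
        and the embeddings of their paired textual descriptions.

        image_emb, text_emb: (batch, dim) tensors; row i of each is a pair.
        """
        # L2-normalize so dot products become cosine similarities.
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)

        # (batch, batch) similarity matrix; the diagonal holds the true pairs.
        logits = image_emb @ text_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)

        # Contrast in both directions: image -> text and text -> image.
        loss_i2t = F.cross_entropy(logits, targets)
        loss_t2i = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_i2t + loss_t2i)

    # Illustrative usage with random stand-ins for encoder outputs:
    # image_emb might come from a CNN over defect images, text_emb from a
    # text encoder over DEG descriptions (both hypothetical here).
    if __name__ == "__main__":
        batch, dim = 8, 256
        image_emb = torch.randn(batch, dim)
        text_emb = torch.randn(batch, dim)
        print(cross_modal_contrastive_loss(image_emb, text_emb).item())

Pulling paired image and description embeddings together while pushing mismatched pairs apart is one plausible mechanism by which the descriptive information could sharpen class boundaries that visual features alone do not separate, which is consistent with the discriminative-power claim in the abstract.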
ISSN: 0894-6507, 1558-2345
DOI: 10.1109/TSM.2024.3448359