Predicting the Heading Angle of Resin During Extrusion Using Semantic Segmentation Based on Edge-Region Focal Loss

Bibliographic details
Published in: IEEE Transactions on Instrumentation and Measurement, 2024, Vol. 73, pp. 1-15
Authors: Lee, Sang Heon; Kim, Min Young; Woo, Min Woo; Lee, Han Chang; Won, Hong-In; Jeong, Seung Hyun
Format: Article
Language: English
Description
Abstract: In this article, a semantic segmentation method based on an edge-region focal loss (ERFL) is proposed to estimate the heading angle of resin in a catheter-extrusion process. The approach combines semantic segmentation, improved by the new loss function, with principal component analysis. Accurate heading angle estimation is critical and depends on the precision of the segmentation, which must remain robust even in the presence of external disturbances. The ERFL improves segmentation by heavily weighting areas with ambiguous boundaries, which is particularly important when diverse semantic elements appear in the background and foreground or near object boundaries. Image data were collected with red-green-blue (RGB) cameras to validate the method. Its accuracy was evaluated with the mean intersection over union (mIoU) and the mean absolute error of the angle, reaching a mean absolute angle error of 0.5002 and an mIoU of 0.8657. These results demonstrate the method's suitability for monitoring the extrusion process. Furthermore, compared with traditional loss functions, the ERFL performs better at segmenting adjacent boundary regions between background and objects and remains robust in noisy environments.
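The abstract does not spell out the ERFL formulation, so the following PyTorch fragment is only a plausible sketch: a per-pixel focal loss whose terms are up-weighted inside a band around the ground-truth object boundaries. The band extraction via max-pool dilation and erosion, the `edge_weight` factor, and the function names are illustrative assumptions rather than the authors' definitions.

```python
# Hypothetical sketch of an edge-region-weighted focal loss (not the paper's exact ERFL).
import torch
import torch.nn.functional as F


def edge_region_weights(target, kernel=5, edge_weight=4.0):
    """Per-pixel weights that emphasize a band around label boundaries.

    target: (N, H, W) integer class map. The band is the set of pixels where a
    dilated and an eroded copy of the label map disagree (assumed heuristic).
    """
    t = target.unsqueeze(1).float()                        # (N, 1, H, W)
    pad = kernel // 2
    dilated = F.max_pool2d(t, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-t, kernel, stride=1, padding=pad)
    band = (dilated != eroded).float().squeeze(1)          # 1 inside the boundary band
    return 1.0 + (edge_weight - 1.0) * band                # (N, H, W)


def edge_region_focal_loss(logits, target, gamma=2.0, edge_weight=4.0):
    """Focal loss whose per-pixel terms are up-weighted in the edge band.

    logits: (N, C, H, W) raw network scores; target: (N, H, W) class indices.
    """
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, target, reduction="none")   # per-pixel cross-entropy
    pt = torch.exp(-ce)                                    # probability of the true class
    focal = (1.0 - pt) ** gamma * ce                       # standard focal modulation
    weights = edge_region_weights(target, edge_weight=edge_weight)
    return (weights * focal).mean()
```

In a training loop this would simply replace the usual cross-entropy call, e.g. `loss = edge_region_focal_loss(model(images), labels)`.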
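The heading angle is described as coming from principal component analysis of the segmented resin region. A minimal NumPy sketch of that step might look as follows; the binary-mask input, the image coordinate conventions, and the (-90, 90] angle range are assumptions made for illustration.

```python
# Hypothetical sketch of PCA-based heading-angle estimation from a segmentation mask.
import numpy as np


def heading_angle_deg(mask):
    """Estimate the orientation of the segmented resin region, in degrees.

    mask: (H, W) boolean or {0, 1} array produced by the segmentation network.
    The angle is measured from the image x-axis and folded into (-90, 90].
    """
    ys, xs = np.nonzero(mask)                      # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    cov = np.cov(pts, rowvar=False)                # 2x2 covariance of (x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    major = eigvecs[:, -1]                         # axis of largest variance
    angle = np.degrees(np.arctan2(major[1], major[0]))
    if angle <= -90.0:                             # the axis direction is sign-ambiguous,
        angle += 180.0                             # so fold the angle into (-90, 90]
    elif angle > 90.0:
        angle -= 180.0
    return float(angle)
```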
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2024.3418107