Deep learning-based visual control assistant for assembly in Industry 4.0
Published in: Computers in Industry, 2021-10, Vol. 131, p. 103485, Article 103485
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract:
• Language to define industrial manufacturing processes.
• Visual assistant to verify product manufacturing processes.
• Recognition of actions during assembly processes.
• Dataset specialized in tools for assembly processes.
Product assembly is a crucial process in manufacturing plants. In Industry 4.0, the range of mass-customized products expands, increasing the complexity of the assembly phase. Operators must therefore pay close attention to small details, and the high complexity of the process can lead to manufacturing errors. To mitigate this, we propose a novel architecture that evaluates the activities of an operator during manual assembly in a production cell so that errors in the manufacturing process can be identified, preventing low quality in the final product and reducing rework and the waste of raw materials and time. This assessment requires state-of-the-art computer vision techniques, such as deep learning, so that tools, components, and actions can be identified by visual control systems. We develop a deep-learning-based visual control assembly assistant that evaluates the activities in the assembly process in real time so that errors can be identified. A general-purpose language is developed to describe the actions in assembly processes, which can also be used independently of the proposed architecture. Finally, we generate two datasets with annotated data to feed the deep learning methods: the first for the recognition of tools and accessories, and the second for the identification of basic actions in manufacturing processes. To validate the proposed method, a set of experiments is conducted, and high accuracy is obtained.
ISSN: 0166-3615, 1872-6194
DOI: 10.1016/j.compind.2021.103485