Schraml, Dominik; Notni, Gunther:
Synthetic training data in AI-driven quality inspection: the significance of camera, lighting, and noise parameters
In: Sensors, vol. 24 (2024), no. 2, pp. 1-18, article 649
2024 · Journal article · OA Gold
Technische Universität Ilmenau » Fakultät für Maschinenbau » Fachgebiet Qualitätssicherung und Industrielle Bildverarbeitung
Title in English:
Synthetic training data in AI-driven quality inspection: the significance of camera, lighting, and noise parameters
Authors:
Schraml, Dominik (affiliated with the university; corresponding author)
GND: 1317938356
ORCID: 0009-0002-4728-404X
Scopus: 57211193829
Notni, Gunther (affiliated with the university)
GND: 172636973
ORCID: 0000-0001-7532-1560
Scopus: 57225127198; 7004204934
Year of publication:
2024
Open access publication route:
OA Gold
Language of the text:
English
Keywords, subject:
Quality control ; Defect Detection ; Synthetic Data ; Blender ; Rendering Parameter ; AI inspection
Media type:
Online resource
Resource type:
Text
License type:
CC BY 4.0
Access Rights:
Open Access
Peer Reviewed:
Yes
Included in statistics:
Yes

Abstract in English:

Industrial quality inspections, particularly those leveraging AI, require significant amounts of training data. In fields like injection molding, producing a multitude of defective parts for such data poses environmental and financial challenges. Synthetic training data emerge as a potential solution to address these concerns. Although the creation of realistic synthetic 2D images from 3D models of injection-molded parts involves numerous rendering parameters, the current literature on the generation and application of synthetic data in industrial quality inspection scarcely addresses the impact of these parameters on AI efficacy. In this study, we delve into some of these key parameters, such as camera position, lighting, and computational noise, to gauge their effect on AI performance. By utilizing Blender software, we procedurally introduced the “flash” defect on a 3D model sourced from a CAD file of an injection-molded part. Subsequently, with Blender’s Cycles rendering engine, we produced datasets for each parameter variation. These datasets were then used to train a pre-trained EfficientNet-V2 for the binary classification of the “flash” defect. Our results indicate that while noise is less critical, using a range of noise levels in training can benefit model adaptability and efficiency. Variability in camera positioning and lighting conditions was found to be more significant, enhancing model performance even when real-world conditions mirror the controlled synthetic environment. These findings suggest that incorporating diverse lighting and camera dynamics is beneficial for AI applications, regardless of the consistency in real-world operational settings.
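Editorial note: the abstract describes rendering image datasets with Blender's Cycles engine while varying camera position, lighting, and rendering noise. The Python sketch below illustrates how such a parameter sweep could be scripted with Blender's bpy API; it is not the authors' released code. The object and light names, the parameter grids, and the output paths are illustrative assumptions, and the script presumes an existing Blender scene containing the injection-molded part with an active camera named "Camera".

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'                     # Cycles renderer, as in the study
scene.render.image_settings.file_format = 'PNG'

camera = bpy.data.objects['Camera']                # assumed camera object name
base_cam_location = tuple(camera.location)

# Add a simple point light whose energy is varied per render (illustrative setup).
light_data = bpy.data.lights.new(name='KeyLight', type='POINT')
light_obj = bpy.data.objects.new(name='KeyLight', object_data=light_data)
scene.collection.objects.link(light_obj)
light_obj.location = (0.3, -0.3, 0.5)

# Illustrative parameter grids: camera offsets, light energy, and Cycles sample
# count (fewer samples leave more residual rendering noise in the image).
camera_offsets = [(0.0, 0.0, 0.0), (0.02, 0.0, 0.01), (-0.02, 0.01, 0.02)]
light_energies = [200.0, 500.0, 1000.0]            # watts
sample_counts = [16, 64, 256]

index = 0
for dx, dy, dz in camera_offsets:
    for energy in light_energies:
        for samples in sample_counts:
            camera.location = (base_cam_location[0] + dx,
                               base_cam_location[1] + dy,
                               base_cam_location[2] + dz)
            light_data.energy = energy
            scene.cycles.samples = samples
            scene.render.filepath = f'//renders/flash_{index:04d}.png'
            bpy.ops.render.render(write_still=True)
            index += 1

In the study, the rendered image datasets produced by such parameter variations were then used to train a pre-trained EfficientNet-V2 for binary classification of the "flash" defect.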