Schraml, Dominik; Notni, Gunther:
Synthetic training data in AI-driven quality inspection: the significance of camera, lighting, and noise parameters
In: Sensors, Vol. 24 (2024), No. 2, pp. 1 - 18, Article 649
Document type: Journal article
Classification (DDC): 08 Engineering sciences » 690 Mechanical engineering / process engineering » 6940 Production and manufacturing technology
Affiliation: Technische Universität Ilmenau » Department of Mechanical Engineering » Group for Quality Assurance and Industrial Image Processing
Title in English:
Synthetic training data in AI-driven quality inspection: the significance of camera, lighting, and noise parameters
Author:
Schraml, Dominik (TU Ilmenau; corresponding author)
GND: 1317938356
ORCID: 0009-0002-4728-404X
Scopus: 57211193829
Notni, Gunther (TU Ilmenau)
GND: 172636973
ORCID: 0000-0001-7532-1560
Scopus: 57225127198; 7004204934
Year of publication:
2024
Open-access publication route:
OA Gold
Language of text:
English
Keyword, Topic:
Quality control; Defect detection; Synthetic data; Blender; Rendering parameter; AI inspection
Media:
online resources
Type of resource:
Text
Licence type:
CC BY 4.0
Access Rights:
open access
Peer Reviewed:
Yes
Part of statistic:
Yes

Abstract in English:

Industrial quality inspection, particularly when it leverages AI, requires significant amounts of training data. In fields like injection molding, producing a multitude of defective parts for such data poses environmental and financial challenges. Synthetic training data emerge as a potential solution to address these concerns. Although the creation of realistic synthetic 2D images from 3D models of injection-molded parts involves numerous rendering parameters, the current literature on the generation and application of synthetic data in industrial quality inspection scarcely addresses the impact of these parameters on AI efficacy. In this study, we examine some of these key parameters, such as camera position, lighting, and computational noise, to gauge their effect on AI performance. Using the Blender software, we procedurally introduced the “flash” defect onto a 3D model sourced from the CAD file of an injection-molded part. Subsequently, with Blender’s Cycles rendering engine, we produced datasets for each parameter variation. These datasets were then used to train a pre-trained EfficientNet-V2 for the binary classification of the “flash” defect. Our results indicate that while noise is less critical, using a range of noise levels during training can benefit model adaptability and efficiency. Variability in camera positioning and lighting conditions was found to be more significant, enhancing model performance even when real-world conditions mirror the controlled synthetic environment. These findings suggest that incorporating diverse lighting and camera dynamics is beneficial for AI applications, regardless of how consistent the real-world operational setting is.
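
The sketches below illustrate, under stated assumptions, the two stages described in the abstract. They are not the authors' original code: object names, value ranges, dataset paths, and hyperparameters are hypothetical.

A minimal Blender (bpy) sketch of the rendering-parameter variation: camera position, light power, and the Cycles sample count (which governs rendering noise) are varied per image. The light name "KeyLight", the orbit radius, and the value ranges are illustrative assumptions; the camera is assumed to track the part, e.g. via a Track To constraint.

```python
# Hypothetical sketch: per-image variation of camera, lighting, and noise in Blender.
import math
import random
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'           # the paper uses the Cycles rendering engine

def render_variant(out_path, cam_angle_deg, light_power_w, noise_samples):
    """Render one still with the given camera azimuth, light power, and sample count."""
    cam = scene.camera
    radius = 0.4                          # assumed camera orbit radius (metres)
    a = math.radians(cam_angle_deg)
    cam.location = (radius * math.cos(a), radius * math.sin(a), 0.25)

    light = bpy.data.objects["KeyLight"]  # assumed area light present in the scene
    light.data.energy = light_power_w

    scene.cycles.samples = noise_samples  # fewer samples -> more visible rendering noise
    scene.render.filepath = out_path
    bpy.ops.render.render(write_still=True)

# Example: render a small set of randomized parameter combinations.
for i in range(10):
    render_variant(
        out_path=f"//renders/flash_{i:03d}.png",
        cam_angle_deg=random.uniform(-15.0, 15.0),
        light_power_w=random.uniform(50.0, 200.0),
        noise_samples=random.choice([16, 64, 256]),
    )
```

Likewise, a minimal PyTorch/torchvision sketch of fine-tuning a pre-trained EfficientNet-V2 for the binary "flash" / "no flash" classification; the dataset layout, learning rate, and epoch count are assumptions, not values from the paper.

```python
# Hypothetical sketch: binary fine-tuning of a pre-trained EfficientNet-V2 (torchvision).
import torch
from torch import nn
from torchvision import datasets, models

device = "cuda" if torch.cuda.is_available() else "cpu"

weights = models.EfficientNet_V2_S_Weights.IMAGENET1K_V1
model = models.efficientnet_v2_s(weights=weights)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # two classes
model = model.to(device)

# Assumed layout: synthetic_renders/train/{flash,no_flash}/*.png
train_ds = datasets.ImageFolder("synthetic_renders/train", transform=weights.transforms())
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                    # assumed number of epochs
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```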