On the digital twin application and the role of artificial intelligence in additive manufacturing: a systematic review

Additive manufacturing (AM), as a highly digitalized manufacturing technology, is capable of implementing the concept of the digital twin (DT), which promises highly automated and optimized part production. Since the DT is a quite novel concept requiring a wide framework of various technologies, however, it is not yet state of the art. Especially the combination with artificial intelligence (AI) methods is still challenging. Applying the methodology of the systematic review, the state of the art regarding the DT in AM is assessed with emphasis on required technologies and current challenges. Furthermore, the topic of AI is investigated, focusing on the main applications in AM as well as the possibility of integrating today's approaches into a DT environment.


Introduction
Additive manufacturing (AM) is becoming increasingly relevant in manufacturing, with double-digit market growth for most of the last 30 years [1], and is expected to drastically change the status quo and shape the way things are produced and brought to the market [2]. This appraisal of AM is based on the new possibilities AM offers in the design and functionality of a technical product [3] as well as in the design of business processes [4].
The nature of the AM technologies is characterized by the demand for digital data for product realization [5], enabling the implementation of another trend emerging from the global digital transformation: the digital twin (DT).
To date, there is a diverse and incomplete understanding of the DT, due to multiple existing solutions and concepts [6]. In its original form, the DT is defined as a digital informational construct about a physical system, which should include all information regarding the system asset that could be obtained from a thorough inspection of the physical system [7]. Glaessgen and Stargel [8] give a more detailed definition, which is widely recognized [6]: here, the DT is described as an 'integrated multi-physics, multi-scale, probabilistic simulation of a complex product' whose aim is to mirror the life of the corresponding twin by using the available physical models, sensor updates, etc. In 2017, Negri et al [9] gave an overview of the definitions of the DT in the scientific literature in the time interval 2012-2016, comparing 16 different sources with an individual definition each. This demonstrates the variety in the understanding of the concept 'digital twin'.
In addition to the variance in the definition of the DT, several terms are used synonymously, namely 'Digital Model', 'Digital Shadow', and 'Digital Twin'. Kritzinger et al [6] proposed a classification of DTs according to those terms, which they linked to different levels of data integration: a digital model (DM) is defined as a digital representation of a physical object without any automated data exchange between the digital and physical object. The digital object can include a description of the physical object with varying degrees of complexity, such as simulation models. As there is no automated data exchange, a change in the state of the physical object has no direct effect on the digital object. The DM is extended to the concept of the digital shadow (DS) by a one-way data flow, meaning that a change in the state of the physical object changes the digital object as well, but not vice versa. If there is a two-way automated data flow, the system of the digital and physical object is considered a DT. In this combination, the digital object can act as a controlling instance of the physical object. There may be other objects, digital or physical, which can induce changes of state in the digital object. A change of state in the physical object leads to a change of state in the digital object, and vice versa. This classification is adopted in this study.
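This classification depends only on the direction of the automated data flows between the physical and the digital object. As a minimal, purely illustrative sketch of the decision rule (all names are hypothetical and not taken from [6]):

```python
from enum import Enum


class IntegrationLevel(Enum):
    DIGITAL_MODEL = "DM"    # no automated data exchange
    DIGITAL_SHADOW = "DS"   # automated one-way flow: physical -> digital
    DIGITAL_TWIN = "DT"     # automated two-way flow


def classify(physical_to_digital: bool, digital_to_physical: bool) -> IntegrationLevel:
    """Classify a digital representation by its automated data flows."""
    if physical_to_digital and digital_to_physical:
        return IntegrationLevel.DIGITAL_TWIN
    if physical_to_digital:
        return IntegrationLevel.DIGITAL_SHADOW
    return IntegrationLevel.DIGITAL_MODEL
```

A manually updated simulation model is thus classified as a DM, a monitoring system feeding sensor data into the model as a DS, and only a system in which the digital object also acts on the physical one as a DT.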
Since the combination of AM, which is considered a digital manufacturing technology, and the DT is highly promising with regard to the digital transformation of the manufacturing industry, it is of high importance to understand the current state of application as well as the future research needed to achieve the goal of a DT in AM. This study aims to give an overview of the state of the art and to identify the topics research needs to investigate by applying the method of a systematic review. Readers interested in the technical details of a specific field are referred to the respective references.

Approaches towards the implementation of a DT
To assess the current state of the art in terms of the implementation of the DT in AM, the method of the systematic literature review is applied. The systematic review is a procedure well established in medicine [10,11] and is becoming increasingly common in management, accounting and finance [12]. A systematic review differs from traditional, narrative literature reviews by following a replicable, methodical and transparent procedure [10,12]. The aim of the systematic approach lies in minimizing the bias introduced by the selection of some studies over others. The idea of the systematic review is to collect the available evidence and then evaluate it against predetermined criteria. In this way, the systematic review balances the identification of a large number of publications with the subsequent narrowing to a smaller set of studies that fit the defined criteria and can inform research agendas [12].
Siddaway et al [10] define five key stages in conducting a systematic review: scoping, planning, identification (searching), screening, and eligibility. During the scoping phase, one or more research questions (RQs) to be answered by the review are formulated. This is an important distinction from the traditional literature review, which generally aims to give an overview of a general topic rather than to answer a specific question. The planning step has two important objectives: first, search terms are defined. A challenge arises from the different terminologies to be considered, as especially in emerging fields of research the terminology is not yet standardized. In a second step, inclusion and exclusion criteria are defined that allow the RQs to be addressed specifically. During the identification phase, the actual literature search takes place. Here, at least two different electronic databases are searched by applying the pre-defined search terms to parts of the articles, e.g. title, abstract, or full text. Additionally, unpublished work can be searched for, though it may be difficult to locate and obtain. After the identification phase is completed, the results are screened in a subsequent step. Within this step, the title and abstract of the identified publications are read and checked against the inclusion and exclusion criteria, respectively. As most of the identified works will not meet the inclusion criteria, it is recommended to note the number of rejected articles. In the last phase of the systematic review, the remaining publications are read completely to determine their eligibility for inclusion. Even in this stage, the number of included works can decrease drastically. Furthermore, all relevant information is extracted from the read full texts.
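The identification, screening, and eligibility stages amount to a filtering funnel with the number of surviving studies recorded at each step. A minimal sketch of that funnel (function and predicate names are hypothetical, for illustration only):

```python
def review_funnel(records, passes_abstract_screen, passes_fulltext_check):
    """Apply the screening and eligibility stages to identified records,
    recording how many studies survive each step (illustrative only)."""
    screened = [r for r in records if passes_abstract_screen(r)]
    eligible = [r for r in screened if passes_fulltext_check(r)]
    counts = {
        "identified": len(records),
        "after screening": len(screened),
        "eligible": len(eligible),
    }
    return eligible, counts
```

The predicates encode the pre-defined inclusion and exclusion criteria; keeping the counts makes the funnel reportable, as recommended by the methodology.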
In the following, the presentation of the systematic review is split into two main sections: first, the methodology is outlined, including the key steps just described. The second section gives the overall results of the review and aims at answering the RQs formulated in the beginning. Next, a list of search terms is created. As this study focuses on AM only, a combination of two main categories is chosen. The first category includes the term 'digital twin', whereas the second category consists of the AM process names and their acronyms as well as variations of both. As inclusion and exclusion criteria, it is defined that only publications dealing with AM and at least a part of the DT are included, and that the work of the studies has to be intended for the creation of a DT.
In the identification phase, six different databases are searched by entering the combinations of the category 1 and category 2 search terms. To reduce the number of possible results, only the title and the publications' keywords are searched. A total of 223 publications is identified.
In the screening step, the abstracts of the identified studies are read first. Afterwards, 26 publications remain for full-text analysis. During the reading of the complete studies, 14 works are excluded, leaving 12 relevant publications for the evaluation of the RQs.

Results of the systematic review
The result of the final screening step is summarized in table 1. The studies are examined and categorized with respect to the point of view, area of application, investigated AM process, overall approach, and the level of integration.

Point of view and investigated AM processes
Most papers (7 out of 12) have a theoretical point of view on the topic of DT in AM, looking into concepts and models (see figure 2). On the practical level, all studies focus on the implementation of sensors for data acquisition. While Nagar et al [5] use sensors to predict maintenance intervals of the manufacturing system, the remaining studies apply sensors to monitor the manufacturing process and predict material properties. Studies working on practical issues for a DT are most likely to study the material extrusion of polymers (MEX/P), also known as fused deposition modeling or fused filament fabrication, while metal AM is investigated on a theoretical level. Anand et al [13], Mukherjee and DebRoy [19], as well as Lui et al [18] describe metal AM in general, whereas Latipova and Baltimerov [17] as well as Wagener et al [22] focus on the laser powder bed fusion of metals (PBF-LB/M).
Hehr et al [15] and Qin et al [20] apply other processes, namely ultrasonic AM and powder bed fusion of polymers, respectively. Ko et al [16] do not further specify a certain AM process of interest. An overview of the different AM technologies is given in figure 3.

Area of application
In terms of the area of application, there is an overwhelming concentration on part quality monitoring and determination (see figure 4). Still, other areas of application emerge, e.g. maintenance in the work of Nagar et al [5], or the work of Qin et al [20], who aimed at determining the energy consumption of manufacturing systems. Latipova and Baltimerov [17] developed a process parameter optimization framework making use of digital techniques. Finally, Anand et al [13] made use of a PLM system for education on AM.

Level of integration
Applying the definitions of the DM, DS, and DT presented in section 1, it is apparent that most studies (9 out of 12) describe a DS. Only two studies present a framework where the digital system is utilized to manipulate the physical system. The classification of some studies [16,17,23] is difficult, though, as no precise information about the interaction between the digital and physical system is given.
Both approaches identified as matching the given definition of a DT are on the theoretical level. Mukherjee and DebRoy [19] propose the combination of a mechanistic model, a sensing and control model, a statistical model, big data, and machine learning (ML) to create a DT intended to shorten the qualification process of additively manufactured metallic parts. The mechanistic, statistical and control models predict the manufacturing process and determine process variables, which are sensed and controlled in the physical system. Sensor and simulation data are combined into a big data batch, which is processed by the ML algorithm. The ML algorithm constantly refines the simulation models, closing the loop of information flow. Mukherjee and DebRoy identify further research needs in all building blocks of the DT, though: the mechanistic models at all scales need to be coupled and advanced to be able to run within a reasonable time. The sensing data should be captured more accurately with enhanced resolution and precision of the sensors. Furthermore, software and hardware for data management, analysis, and decision-making need to be improved. Finally, all the different blocks have to be integrated into one general framework by establishing strong and rapid interactions among them.
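The core of this closed loop — the simulation predicts, sensors measure, and a learning component refines the model from the mismatch — can be caricatured in a few lines. The sketch below refines the single coefficient of a toy linear process model from sensed observations by gradient steps; it is an assumption-laden illustration, not the mechanistic multi-physics models of [19]:

```python
def refine_model(k, observations, lr=0.1):
    """Refine the coefficient of a toy process model y = k * x from
    (input, sensed output) pairs via stochastic gradient descent."""
    for x, y in observations:
        prediction = k * x
        k -= lr * (prediction - y) * x  # gradient step on the squared error
    return k


# A 'physical system' whose true behavior is y = 2 * x; the model starts wrong
# and is pulled towards the sensed reality, closing the information loop.
refined_k = refine_model(0.0, [(1.0, 2.0)] * 100)
```

In a real DT, the refined model would in turn be used to determine the next process variables, which is exactly the two-way data flow that distinguishes a DT from a DS.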
The second work classified as a DT is by Lui et al [18]. They present a collaborative data management framework for metal AM, where a cloud application as the core of the framework communicates with distributed sub-frameworks of the different product lifecycle stages. A metal AM product data model is proposed, containing a list of specific product lifecycle data that influences the product quality. Additionally, Lui et al develop an application scenario of deep learning-based layer defect analysis, aiming to enable both off-line product design and process optimization and on-line layer defect detection. The development of simulation and prediction models remains open.
Notable is the use of artificial intelligence (AI) in both of the studies discussed as well as in other works listed in table 1, especially in combination with other Internet of Things (IoT) techniques. Nagar et al [5], Latipova and Baltimerov [17], and Wang et al [23] implement AI methods to process the data collected via the implemented sensors. Theoretical concepts [16,19,20] are designed to integrate a subsequent AI application or plan to do so in future research [18].

Discussion of the current approaches towards the implementation of a DT
During the systematic research performed, 12 relevant publications have been identified. Only two, though, can be clearly classified as presenting a DT. The remaining studies mostly range at the level of a DS, indicating that there has been progress in the development of a DT by moving away from the DM towards the DS. The current focus of research is therefore found to lie in the realization of data acquisition and data processing. Real-time prediction and the corresponding adjustment of the design and/or manufacturing process are so far considered only theoretically. Additionally, when dealing with specific sensors for data acquisition, the overall picture the DT is aiming for is lost sight of, resulting in sub-solutions that still need to be assembled to create the DT.
One reason why the DT is not yet implemented in the identified studies is the availability of the necessary technology. Cohen et al [24] identified six key enabling technologies to achieve the Industry 4.0 goal: the first is IoT technology, referring to the robust communication between the digital and physical worlds. This includes sensors, actuators or mobile devices that are operated via Wi-Fi, Bluetooth, cellular networks or near field communication, as well as bar codes or social networks. Next is the use of vision systems such as motion capture to digitalize human activities. The third technology identified as relevant to Industry 4.0 is ubiquitous computing, describing the seamless integration of a virtual computer model into physical networks of objects. Smart devices capable of integrating devices and information systems or of data sharing and exchange enable this. Furthermore, to be capable of processing the enormous amount of data acquired (also called 'big data'), cloud computing is a crucial technology. The fifth key technology is the cyber physical system (CPS). Examining the studies of table 1 in order to answer RQ 1, it is noticeable that the aspects of IoT and CPS are very prominent. This is mainly due to the focus on sensor integration, as IoT is the technology enabling the communication with those sensors, while the whole system can be regarded as a CPS. The application of cloud systems (not cloud computing, though) and data management can be found as well. For the data processing, mainly AI is applied. Besides the challenge of developing a general framework able to incorporate all physical and digital aspects, a second area has not been touched yet in the studies presented in table 1: the modeling of the physical system for real-time prediction. Only Wagener et al [22] included the transfer of experimental data into a modeling approach to advance the prediction of part fatigue properties.

Challenges of the DT implementation
Looking at the overall picture created by the presented systematic review, little progress and effort towards the development of a complete DT in AM seem to be taking place. However, this is not true with regard to innovation in the required key technologies and the involved fields of application.
Numerical modeling of the AM processes is intensively researched as it provides the possibility to avoid extensive experimental investigations and to increase our understanding of the physics of the manufacturing process [25]. Due to the various length scales of energy input, feedstock material, part size, manufacturing equipment and production site, multiple scales of modeling have been developed. While the modeling methods and algorithms constantly improve, reducing the error of the computation, the computational times have not been reduced to the point of real-time computation, preventing the direct integration of these models. Additionally, the complexity of multi-scale modeling increases the required computational effort further [26].
As identified in the previous section, the possibility of closed-loop process control is a key factor in establishing the two-way data flow of the DT, which is still missing in most cases. Bikas et al [27] mapped the available process control in AM and compared it to the key process variables involved. They concluded that while the potential for production quality improvement is already being explored, the accuracy of the models in the literature is not sufficient and requires further verification. Jared et al [28], who also highlight the current contradiction between model complexity and computational resources, share this conclusion. More recently, Kim et al [29] point out a trend towards in-situ process correction in various AM technologies. This is attributed to the increased availability of integrated sensors and camera systems. They use a closed feedback loop to couple the design and manufacturing process, enabling the real-time correction of the design based on the manufacturing parameters. Radel et al [30] as well as Li et al [31] demonstrate the implementation of self-regulating wire-arc AM. Radel et al control the position of the nozzle only, though; no optimization of the deposition process is performed. However, this is the case in [31], where the deposition rate is adjusted to increase the dimensional accuracy of thin walls. Rivera and Arciniegas [32] summarize commercial systems for in-situ parameter correction as well as selected academic approaches. They found that while all approaches are successfully validated, the respective frameworks focus on only one or two specific parameters. Therefore, more sensors need to be integrated into the process to create a more robust system for real-time monitoring and control.
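The in-situ parameter corrections surveyed above reduce, in their simplest form, to a feedback loop on a single process variable. A toy proportional controller in the spirit of the deposition-rate adjustment of [31] (the plant model, gains and all names are hypothetical, for illustration only):

```python
def regulate_deposition(target_width, measure_width, rate=1.0, gain=0.2, steps=30):
    """Toy proportional controller: adjust the deposition rate until the
    measured wall width matches the target width."""
    for _ in range(steps):
        error = target_width - measure_width(rate)  # in-situ measurement
        rate += gain * error                        # proportional correction
    return rate


# Hypothetical plant: wall width grows linearly with deposition rate.
final_rate = regulate_deposition(4.0, measure_width=lambda r: 2.0 * r)
```

The limitation noted by [32] shows up directly in such a sketch: the loop regulates exactly one variable against one sensor, whereas a robust real-time system needs many coupled loops fed by many sensors.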
An essential prerequisite for closed-loop process control is the availability of sensors and measuring equipment. Currently, an increasing amount of equipment is being implemented and tested in AM manufacturing systems: Chua et al [33] identified the application of infrared cameras, photodiodes, pyrometers, CCD high-speed cameras, and non-destructive testing equipment as key technologies for in-situ process monitoring. This conclusion is backed by the fact that especially camera systems and coaxial sensors operating on thermal radiation are being implemented in state-of-the-art commercial quality monitoring systems [34]. In [32], a summary of active research areas includes the supervision of motion in addition to the monitoring of the material deposition, solidification, or melting in the various AM technologies. For this task, rotational as well as linear optical encoders [35], accelerometers [36], and attitude sensors [37] are used. In addition, acoustic emission testing shows high potential in detecting micro-cracks and pores during manufacturing [34].
The references provided and discussed [25][26][27][28][29][30][31][32][33][34][35][36][37] introduce the respective topics relevant to the DT. Nevertheless, they by far do not cover all innovations and developments. The discrepancy between the result of the systematic review and the literature available on single topics highlights one of the major challenges in the implementation of the DT: as the DT is of great complexity and involves numerous disciplines, it stands to reason that the goal of a complete DT cannot be achieved by a single research group. Consequently, communication between different groups working on selected parts of the DT is a crucial factor. If there is no direct contact between the groups, this is possible only through the use of common, specific keywords. The numbers of the different screening steps of the systematic review give an example of this issue: on the one hand, the number of publications after the database search was quite high and was then reduced to only 12, indicating an excessive use of the considered keywords even where no relation is evident. On the other hand, countless publications relevant to parts of the scope have not been identified.
A prerequisite for the consistent use of appropriate keywords in publications, though, is a global agreement on the definition of key terms. Another essential part of enabling the DT's construction out of various groups' results is the implementation of standards and interfaces, ensuring the interconnectivity of the partial systems [38]. More precisely, Lee et al [39] summarize that standardization in the fields of AM as well as DT includes standardization of the control, the interface, the interoperability, the definition and the integration in the area of industrial cyber and physical manufacturing systems. Since AM and DT are relatively new technologies, there are no final standards yet. Nonetheless, many efforts are being made to create standards [40]. Two of the main organizations for standardization, the International Organization for Standardization and the American Society for Testing and Materials, combined their efforts in 2013 [41] to improve efficiency and prevent confusion due to differing formulations in competing standards.
Even with standards present, the implementation of their elements will be subject to discussion in industry and academia, as Jacoby and Usländer [38] point out for the field of DT. An exception from this statement is the definition of data and communication interface formats. In AM, a specific data format for geometrical data, the so-called AM format (AMF), is designed to target several issues: file format diversity induced by the various computer-aided design (CAD) software providers, issues in accuracy and data security of the current exchange file formats STEP and STL, as well as the file format diversity in terms of manufacturing system files [39,41]. AMF is supposed to be independent of the manufacturing system and contains no information specific to a single AM technology. In order to achieve the interoperability of the various components of a DT, machine protocols such as Open Platform Communications Unified Architecture (OPC UA) are essential as they allow the exchange of information and the interaction of different systems [42,43]. An example of a successful use of OPC UA to create a cyber-physical system of an MEX/P system is given in [44]. Here, a control system based on OPC UA is proposed and validated. However, the authors note that existing control mechanisms in MEX/P lack the ability to incorporate physical feedback and provide sparse options for extending the system. Therefore, the implementation of a DT following the definition of section 1 is not possible today.
In summary, in order to answer RQ 2, the following challenges to the implementation of the DT in AM are identified:
• standardization and consistent use of terminology
• development and standardization of general frameworks and interfaces (physical and digital)
• reduction of model complexity for prediction

Comparison to other fields of application
While AM's technological characteristics promote the development of a DT, this does not imply a lack of innovation and progress in DT-related fields other than AM. Table 1 names four main areas of application: education, manufacturing, quality monitoring, and predictive maintenance. To put the results of the systematic review into the big picture of industrial production, a short introduction to non-AM works is provided.
In terms of manufacturing, Cimino et al [45] performed a systematic literature review focusing on the application of the DT in manufacturing or production, respectively, independent of the manufacturing technology considered. They identified 52 articles covering several application purposes within manufacturing, ranging from the obvious production monitoring and optimization, over the design or maintenance of a production system, to production management. Overall, Cimino et al found two major gaps in the current development of the DT: first, the integration of the DS with the physical control system is missing with few exceptions, leaving published frameworks at the level of a DS rather than a DT. Second, only a limited set of 'services' is offered in a single DT application, according to the service classification of Tao et al [46]. In most cases, the scope of the developed DT is strictly related to the application in focus; scalability, e.g. of a tool equipment DT to a production line DT, is rarely taken into account. Cimino et al then present a DT framework addressing the issues observed during their literature review. It is applied and tested with reduced functions at the laboratory scale. He and Bai [47] introduce the concept of sustainable intelligent manufacturing to the DT. When reviewing the literature on both DT and sustainable manufacturing, they identified a significant overlap in the required IoT technologies, suggesting good compatibility of both aspects. Similar to [45], they present a theoretical framework of their own, integrating the concept of sustainable manufacturing into the DT. In [48], Bao et al focus on the aspect of modeling for DTs of the product, the manufacturing process, and the operations at factory level. They find a lack of interfaces between the respective models, and proceed to derive a method for constructing DT models capable of interconnection.
In the field of quality monitoring, the main challenge remains the amount of data that can be acquired by sensors and its processing. In order to develop methods capable of processing the achievable data volume, recent research trends either aim at the application of AI [49] or try to reduce the complexity by hierarchical structuring [50] and the development of DT architectures operating on fewer variables [51]. However, most works operate on the level of a DS. This is also true in the field of predictive maintenance [52][53][54], where the goal of the DT application is the prediction of the remaining useful life of a component. While the data provided by the physical system constantly updates the prediction model, no automatic control of the physical system based on the prediction result is enabled.
The DT's concept is not limited to the development and production of goods. Its application in technical education is receiving increasing interest, pushed by the global change towards digital methods in education due to the pandemic. In engineering, the virtual system enables students to experience technical systems hands-on. This concept has been applied to various courses such as production engineering [55] or construction [56,57]. Interestingly, the students are not always limited to the virtual object: Liljaniemi et al [58] set up a physics-based interactive tool simulation for machining and connected it to a physical system, enabling remote control via the simulation. The influence of the physical system on the virtual one is not implemented, though, therefore not meeting this work's definition of a DT. The excavator that is the lecture's subject in [57] provides two-way communication by manual remote control and sensor feedback, but the direct influence on the respective other system is not automated. Aside from providing digital education environments, the DT can also be applied to the students themselves [59]. Massive open online courses (MOOCs) are a disruptive technology currently challenging traditional classroom teaching, as university courses are made available at a nominal cost. MOOCs also present the opportunity to personalize education, with the goal of meeting the students' individual needs based on their natural talent, interest, and background. Adaptive learning frameworks are constructed to assess a student's current level of knowledge and the ways in which they learn most efficiently, and then to suggest corresponding courses or even pick the most helpful type of task to reach an education goal.
In [45][46][47][48][49][50][51][52][53][54][55][56][57][58][59], a broad overview of various aspects of the application of the DT is given. In comparison with AM, it demonstrates that the main challenges remain the same, regardless of the field of application or the technologies considered. This observation highlights the importance of keeping track of the developments in all disciplines and of enabling the interchange of ideas and solutions between them in order to achieve the goal of the implementation of a true DT.

Key technology AI
A key technology mentioned and applied in DT-related publications is AI. Similar to the field of data processing, where the standard methods are not able to perform well when confronted with the amount of data acquired, AI may assist in enabling real-time prediction of the manufacturing process and part performance, which can help improve the design and manufacturing phases of a product. Contrary to numerical simulation, AI is capable of real-time decisions. The major drawback of AI is the amount of training datasets necessary to achieve reasonable results, which can be generated experimentally only at high cost and effort. Here, numerical simulation can provide a high amount of reasonable training data with much lower effort. Therefore, the combination of AI and numerical simulation, enhanced by experimental data, poses a possible solution for the development of the missing prediction system of the DT.
As AI seems to be a crucial technology for the DT in both data processing and real-time prediction, while the implementation of sensors is assumed to be solvable by the AM system manufacturers, a second systematic review is conducted to identify the research needed to mature the AI technology in AM for the completion of the DT system.

AI in AM
AI is defined as the effort to automate intellectual tasks normally performed by humans [60, p 4]. The definition is therefore very generous and can be used as a generic term for different concepts that generally include learning- as well as rule-based approaches. ML is only one path towards AI, besides e.g. natural language processing, but a promising one. ML is the study of computer algorithms that allow computer programs to automatically improve through experience [61, p 2]. So, in comparison to classic programming, where rules and data are turned into answers, in ML, data and answers are turned into rules. To do so, ML methods employ statistical methods to process data and build data-driven models, which are able to improve by themselves without being explicitly programmed. Gaining popularity in various sectors for applications in image recognition, data prediction, decision-making, and trend prediction, ML is increasingly utilized in the manufacturing sector [62].
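The inversion 'data and answers are turned into rules' can be shown in miniature. The sketch below infers the rule y = a·x from example pairs by ordinary least squares instead of being given the rule up front (purely illustrative; function and variable names are hypothetical):

```python
def learn_rule(examples):
    """Infer the coefficient a of the rule y = a * x from (x, y) pairs
    by ordinary least squares -- data plus answers yield the rule."""
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den


# Classic programming would hard-code 'y = 3 * x'; here the rule is learned.
a = learn_rule([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)])  # recovers a = 3.0
```

Practical ML models differ only in scale from this sketch: the rule has millions of parameters and the statistical fitting procedure is iterative, but the principle of deriving the rule from data and answers is the same.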
In the context of the solutions for the AM process investigated in this work, this is achieved by finding patterns in process-describing data that pertain to a known and well-defined quality attribute of the final part. An important goal and indispensable achievement of a model is generalization. Generalization requires a good performance, which needs to be measured by a verifiable, objective metric. It depends on the algorithm and the data, regardless of whether the input consists of known or unknown data [63]. A major challenge is obtaining a sufficient quantity of high-quality data that is representative, with only few outliers and little noise.
ML algorithms can be categorized into supervised learning with labeled data, unsupervised learning with unlabeled data, and reinforcement learning (RL) with rewarded interactions with the environment. Based on the problem as well as on the gathered data, different algorithms are considered. Furthermore, innovations in the area of computational hardware play a crucial role in the development of AI: on the one hand, very complex problems like natural language disambiguation are solvable by simple algorithms, given enough data [64]; on the other hand, older developments like neural networks (NNs), invented in the 1950s and built in a structure similar to the human brain, achieve new success with year-to-year higher computational power. These possibilities have led to the development of bigger and deeper NNs, which can contain hundreds of layers of representations of the data. This is called deep learning.
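The distinction between the learning paradigms can be made concrete with a small unsupervised example on synthetic data: clustering recovers the group structure without ever being given labels, whereas the supervised sketches above require labeled answers.

```python
# Unsupervised learning sketch: two well-separated synthetic "process regimes"
# are recovered by clustering without any labels being provided.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(0.0, 0.3, (50, 2)),   # regime A (synthetic)
    rng.normal(3.0, 0.3, (50, 2)),   # regime B (synthetic)
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # no labels given
labels = km.labels_
# the two regimes are recovered from the data structure alone
```

Which paradigm applies is thus decided by the data at hand: if quality labels exist, supervised methods are used; if only raw process data is available, unsupervised structure finding as above is an option.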
The success of ML concepts for problems like image classification or language translation and generation suggests that pattern finding can yield similar achievements in the field of AM. The AM process is a highly complex physical process and produces a large amount of different types of raw data. This multi-dimensional data is difficult for humans to interpret, indicating high potential for the effective application of ML. The challenge of the high cost of labeling the produced data is not to be neglected, though.
As already described in section 2, the same method of the systematic review is applied. This section focuses on the current state of the art in terms of the implementation of AI, and therefore ML, in AM. The following presentation of the systematic review is split into two main sections: first, the methodology is outlined, including the key steps just described. The second section gives the overall results of the review and aims at answering the RQs formulated in the beginning.

Methodology of the systematic review
The review is based on the methodology described in figure 5. The scoping consists of two different RQs, which give the structure throughout this review:

RQ1: What is the main challenge in the AM process chain for which AI is used?
Publications focusing on the DT mention that the use of AI would improve their models. Nevertheless, it is important to identify the areas where AI is already practically applied in order to judge the actual capabilities of AI in AM today. The second question relates to the context of the DT:

RQ2: Is the possible later use in larger contexts such as DT considered in the AI development?
When an algorithm is integrated into a framework, there are defined requirements on the algorithm in order to ensure interconnectivity in terms of communication and data transfer. Therefore, the effort of integrating a given AI work would be reduced if the larger context is already considered and necessary features such as interfaces and protocols are planned for. For answering RQ1, a list of search terms in two categories is created: Category 1: terms that refer to AI and ML in general as well as to specific models like neural networks; Category 2: additive manufacturing in general and the single process types in particular. RQ2 is mainly answered based on the results of the screening. Figure 5 shows the searched databases and the combinations of words from the two categories in keywords and titles.
After the identification phase, there are 200 publications with a context of AI in AM. Many topics are investigated, ranging from data preparation, design, process automation, and monitoring towards a better cost estimation of the final part. To further reduce the number of publications, the important topic of quality assurance is chosen. Industrialized AM is applied in industries such as aerospace and medical part production. Therefore, the achieved quality is an important and indispensable criterion for the broad industrial application of AM.
Fifty-four studies discuss the topic of quality assurance, but further inspection showed that nine publications either are off-topic or review the current state of the art, leaving 45 works for full-text evaluation. After this step, 38 publications are considered for the review.

Results of the systematic review
The result of the final screening step is summarized in table 2. The publications are categorized with respect to the investigated AM process, applied AI method, and the quality assurance aspect focused on.

Investigated AM processes
A large portion of the studies examine either the PBF-LB/M or the MEX/P process (see figure 6). This is due, on the one hand, to the high availability of MEX/P printers in university environments and, on the other hand, to the industrial application of PBF-LB/M produced parts and the resulting need for advanced quality assurance systems. Nevertheless, other processes such as directed energy deposition (DED) [69,73,84,85,89], PBF-EB/M [83], liquid metal flow rapid cooling (LMFRC) [82], and digital light processing (DLP) [90] are subject of the investigations.

Applied AI methods
The choice of the right algorithm is mainly based on the available data as well as on the desired outcome, although many algorithms can be used both for classification and for regression tasks. Many different approaches are used in the examined publications; a short overview is presented in the following. Decision trees (DT) make predictions based on conditions with calculated limits, and random forests (RF) are an ensemble of DTs. The support vector machine (SVM) is a powerful algorithm that separates the data by linear or nonlinear decision boundaries. K-means clustering, the linear and quadratic discriminant analysis (LDA, QDA), and Gaussian mixture models (GM) group the data based on its statistical structure.
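The relation between decision trees and random forests named above can be verified directly in a small sketch on synthetic data: a random forest is literally a collection of individual decision trees whose votes are combined.

```python
# A random forest is an ensemble of decision trees; the fitted forest exposes
# its individual trees. The XOR-like data is synthetic, chosen only so that
# a fit is possible.
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [1, 1], [0, 1], [1, 0]] * 10
y = [0, 1, 1, 0] * 10

rf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
trees = rf.estimators_  # the 25 individual decision trees of the ensemble
```

Each tree is trained on a random subsample of the data and features, so the ensemble averages out the overfitting tendencies of any single tree.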
Convolutional neural nets (CNNs) are (artificial) neural nets with a special convolutional layer that condenses the values of a small region, concentrating on low-level features of the presented data, which is useful for recognition in image data. Deep NNs (DNNs) consist of a large number of layers. Self-organizing maps (SOMs) are also NNs, but designed for reducing dimensionality. Genetic algorithms (GA) are inspired by the process of evolutionary natural selection and find the solution to an optimization problem by iterating through many possible solutions.
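The condensation of local regions by a convolutional layer can be sketched in plain NumPy. The tiny image and the hand-crafted 1×2 edge filter below are illustrative; in an actual CNN, such filters are learned from data.

```python
# Convolution sketch: a small filter slides over the image and condenses each
# local 2-pixel region into one value, here responding to vertical edges.
import numpy as np

image = np.zeros((5, 5))
image[:, 2:] = 1.0              # tiny synthetic image: dark left, bright right
kernel = np.array([-1.0, 1.0])  # hand-crafted vertical-edge detector

out = np.zeros((5, 4))
for i in range(5):
    for j in range(4):
        out[i, j] = np.sum(image[i, j:j + 2] * kernel)
# the response is nonzero exactly at the edge between dark and bright pixels
```

The same sliding-window principle, applied with many learned filters and stacked in layers, is what lets CNNs pick up low-level image features such as edges and textures.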
Bayesian networks (BNs) and Bayesian Dirichlet processes (DP) are based on probabilistic models, as is the probabilistic classifier naïve Bayes. The logistic regression (LR), ridge regression (RR), and support vector regression (SVR) algorithms can be used for calculating the probability of belonging to an estimated class. The main idea of adaptive boosting (AB) is to combine several weak predictors into a single strong predictor.
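The boosting idea of combining weak predictors can be sketched as follows, using synthetic data and scikit-learn's AdaBoost with its default depth-1 tree stumps as weak learners:

```python
# Adaptive boosting sketch: a single depth-1 tree ("stump") is a weak
# predictor on data with a diagonal class boundary; boosting many stumps
# yields a far stronger combined predictor. Data is synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # diagonal boundary: hard for one stump

weak = DecisionTreeClassifier(max_depth=1).fit(X, y).score(X, y)
strong = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y).score(X, y)
# the boosted ensemble of stumps clearly outperforms the single weak predictor
```

Each boosting round re-weights the samples the previous stumps got wrong, so the ensemble progressively concentrates on the hard cases near the boundary.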
For further details on the respective AI methods, the reader is referred to [63].

Investigated quality assurance aspect
Within the field of quality assurance, various aspects are investigated (see figure 7), with surface roughness, defects in general or with emphasis on porosity, deformation, melt track geometry, and powder properties being the main interests. To predict surface roughness, Rao et al [88] generated a massive amount of time series data collected by multiple sensors attached to a MEX/P machine. A Bayesian DP model is trained and utilized for the prediction. Focusing on the AI model rather than the data, Wu et al [98] as well as Li et al [80] tested various AI models to compare their performance in surface roughness prediction. No significant difference in the accuracy of the models was found, highlighting the importance of the training data for reasonable results. Unlike other studies, Liu et al [82] used image data instead of time series data for the surface roughness prediction by a CNN. That way, they achieved real-time surface roughness prediction. Khorasani et al [79] applied an artificial NN (ANN) to model the influence of post-processing techniques such as heat treatment and machining on the final surface roughness of parts manufactured by PBF-LB/M, rather than aiming for a surface roughness prediction prior to or during the manufacturing process. Different from the studies focusing on surface roughness prediction, publications addressing the challenge of porosity prediction mainly use image data rather than time series data. Seifi et al [89] aimed at predicting the size and category of pores by interpreting in-situ thermal history monitoring in DED with the help of two different AI methods, namely a random forest (RF) and an SVM. How the models interact or split tasks is not explained in detail. Williams et al [96], Wasmer et al [94] as well as Gaja et al [73] used acoustic monitoring in PBF-LB/M and DED, respectively.
They utilized CNNs to classify the acoustic signal, and since CNNs perform better on image data than time series data, they converted the acoustic signal into spectrograms. In a subsequent study, Wasmer et al [95] apply RL to train their model in real-time. Another approach to predict porosity independent of the respective AM process is the classification of melt pool CT-images [75,77].
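The signal-to-image conversion described above can be sketched with a synthetic acoustic signal; the sampling rate and frequencies below are arbitrary stand-ins for real acoustic emission data.

```python
# Spectrogram sketch: a 1D acoustic time series is converted into a 2D
# time-frequency representation so that image-oriented CNNs can process it.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                                  # sampling rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)
# synthetic stand-in for an acoustic emission signal: two tones
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

freqs, times, Sxx = spectrogram(signal, fs=fs)
# the 1D series becomes a 2D "image" (frequency bins x time windows)
```

Once in this 2D form, the spectrogram can be fed to a CNN exactly like camera imagery, which is what makes the conversion attractive for acoustic monitoring.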
While porosity is only one type of defect, other studies aim for a more general approach to defects, including cracks, pores, and delamination, depending on the investigated AM process. Most publications focus on defect detection and classification during the manufacturing process; only Cui et al [69] deal with images captured after DED manufacturing in order to make a rough statement regarding the part quality. Liu et al [83] designed their concept for operation during manufacturing, but demonstrated their achievements with the help of images taken after manufacturing. Furthermore, the majority of approaches utilize imaging, either 2D images obtained by some kind of camera [69,70,76,80] or 3D images obtained by radar [72] or laser scanning [83]. Liu et al [83] even combine both types of images. Accordingly, those works apply NNs to process the images, as they are well suited for this task. Different from this approach, Bacha et al [67] integrated various sensors into their MEX/P setup, monitoring the system's performance parameters such as axis speed or temperatures. This time series data is then interpreted by a BN. Gaja and Liou [73] pick up acoustic emissions and feed them into an ANN for defect detection in DED. Last, Okaro et al [86] use a photo-diode to find anomalies in the melt pool during PBF-LB/M and aim to link this data with the resulting part quality in terms of ultimate tensile strength (UTS) with the help of a GM model. This approach also differs from the others mentioned above in the goal of assessing the overall part quality rather than detecting and identifying single respective defects.
One important criterion in terms of part quality is the strength of the produced part. While Okaro et al [86] use the UTS to be able to label their data correctly, Vijayaraghvan et al [93] and Yadav et al [97] focus on predicting this property for MEX/P parts. Here, Yadav et al apply a GA to predict part strength with the process parameters as input. Vijayaraghvan et al, similar to Wu et al [98] and Li et al [80], examined the performance of several AI models in order to determine the wear strength of a MEX/P part, and built an improved model outperforming the standard models investigated before.
Besides possible defects within an AM part, the dimensional accuracy is a critical factor in part quality. Since it is possible to address this issue either by adjusting the process parameters according to the part geometry or by adjusting the part geometry itself, e.g. by pre-deforming [90], the focus is on deformation prediction prior to the actual manufacturing. Shen et al [90] applied a DNN to predict the deformation based on the CAD file and a second DNN to compensate for the calculated deformation within the CAD design in the context of DLP AM. Deswal et al [71] as well as Khanzadeh et al [78] aimed at predicting the deformation based on the chosen process parameters in MEX/P. While Khanzadeh et al used a SOM for this task, Deswal et al compared different models and found the ANN to be the most accurate. They later coupled the ANN with a GA to optimize process parameters for a given CAD design. Papazetis and Vosniakos [87] incorporated an ANN not only for deformation prediction in MEX/P, but for other defects as well. Instead of CAD data and/or process parameters, they took 3D images via laser scanning after manufacturing to classify a part as faulty. A similar approach is taken by Tootooni et al [92], who aimed at reducing the amount of data necessary when using 3D imaging to the bare necessity by measuring only sparse samples while training with the full measurements. That way, the effort for quality assurance is significantly reduced. They also evaluate various ML methods.
In powder bed processes such as PBF-LB or PBF-EB, the powder properties are essential to a successful manufacturing process with high part quality. DeCost et al [65] combined computer vision and ML methods to characterize the 'microstructural fingerprint' in order to evaluate powder properties such as particle size, shape, and surface texture. Desai and Higgs [74] addressed the powder spreading process by developing a spreading process map through coupling a discrete element method (DEM) model and a NN.
In order to be able to identify irregularities during the manufacturing process rather than assessing the part quality itself, Li et al [81] and the group of Yuan et al [99,100] dealt with the melt track in LMFRC and PBF-LB/M, respectively. Li et al trained their NN with cross section images and linked the process parameters with the corresponding melt track cross section geometry. Yuan et al used video imaging to record the manufacturing process in combination with a CNN to detect anomalies in the manufacturing process.
Last, there are a few studies on quality aspects not touched by other works that are part of this review. Amini and Chang [66] developed a ML method they named multi-layer classifier for process monitoring to be able to classify a given PBF-LB/M produced part as either defective or of good quality based on sensor data obtained during the manufacturing process. Which aspects of part quality are addressed in this classification is not specified, though. Mohajernia et al [85] used an ANN as a substitute for finite element analysis to be able to predict residual stresses of parts processed by DED given the process parameters. Different from most studies they reported an unsatisfactory accuracy of the model, indicating further research needs.
Stanisavljevic et al [91] investigated external disturbances in MEX/P by applying different types of vibrations. They then trained a framework including various ML methods to identify and classify those vibrations with the help of sensor data from the manufacturing system. Zhang [101] utilized an ANN to interpret images obtained through Fourier-transform infrared spectroscopy to classify parts manufactured by MEX/P with regard to the corresponding thermal degradation.
Banadaki's work [68] is conceptual, presenting an approach to a smart factory where distributed ML connects multiple manufacturing systems of the same type. That way, the potentials of an Industry 4.0 environment, such as security, reliability, scalability, better product performance, and cost-efficiency, are to be exploited. With this concept, Banadaki approaches the concept of a DT, but since no direct interaction between the ML methods and the manufacturing systems is planned for, it remains within the realm of a DS.

Discussion of the current application of AI in AM
During the systematic research carried out, it became evident that AI is applied in almost every AM process stage. The main field of application has been identified as quality assurance during or after manufacturing. This finding became apparent already during the screening process, which is why the final review stage deals with this topic only. Quality assurance itself is in need of data science methods such as AI for various reasons: on the one hand, in order to achieve broad industrial application of AM beyond prototyping, quality assurance plays a crucial role in certifying and licensing AM products. On the other hand, especially within the more complex AM processes, there is a vast number of influencing factors. Even if it is possible to monitor all those factors, the sheer amount of data is difficult to process. Furthermore, as many factors are interrelated, common statistical methods are prone to failure. AI provides a solution to those issues.
Within the field of quality assurance, the majority of the studies approached the detection and classification of defects in general and porosity in particular (15/38 and 6/38 studies, respectively). The next areas of interest are surface roughness (6/38 studies) as well as part deformation (5/38). This ranking may be attributed to several circumstances: first, while deformation in metal can to some extent be prevented by conventional stress-relief heat treatment after manufacturing, and the field of surface finishing offers a variety of processes to achieve the required surface conditions, the repair of defects such as pores and cracks is more complicated and costly. Hot isostatic pressing (HIP) is able to reduce inner porosity and homogenize parts with lack of fusion between layers, but it cannot deal with defects connected to the part surface [102]. Furthermore, the HIP process is characterized by a long process cycle due to the required heating/cooling and (un)pressurizing. To repair cracks on the part surface, laser re-melting is a possible solution [103], especially when laser-based AM processes are involved. The effort for the process preparation is high, though, and the re-melting alters the microstructure at the surface, which may not be desired. Therefore, it is of great importance to prevent any defects or to be able to detect faulty parts.
The surface roughness is a quite diverse topic: in some applications a high surface roughness leads to critical failure, e.g. because the surface profile acts as a crack initiator reducing fatigue resistance [104]. Other technical applications benefit from a high surface roughness, e.g. by decreasing gravitational settling of dispersed particles in horizontal channel flow [105] or by improving the migration of osteogenic cells into the material within bone tissue engineering [106]. As the effect of surface roughness on fatigue resistance and dimensional tolerances poses a challenge to most technical products, there are great on-going efforts in research to minimize and control surface roughness [107,108]. Along with these efforts, the need for surface roughness prediction prior to and during the manufacturing process is increasing in order to be able to adjust and optimize process parameters.
As many AM processes require heat to fuse adjacent layers, residual stress due to the high thermal gradients may be introduced to the part during manufacturing [109]. If the residual stresses are too high, deformation of the part occurs since the material aims at releasing the stress. Even though there are counter-measures to residual stresses such as pre-heating of the build platforms or stress-relief heat-treatment after manufacturing, stress-induced deformation is still a technology characteristic to be considered.
Even though the quality challenges addressed by studies incorporating AI focus on various aspects and scales, the studies are quite similar regarding the type of data processed and the overall task to perform: most include either time series data from sensors or image data obtained by some kind of camera, with the main task of the AI being the interpretation of the respective data and/or the prediction of a certain part property. CAD or other kinds of data are rarely involved. This is in accordance with the role AI is assigned in studies on the DT.
The studies evaluated in this review also show significant similarities in the choice of AI method. Most studies apply a NN (26/38), and the SVM (7/38) as well as the random forest method (5/38) are used in various works, too. The studies that compare different AI methods found NNs to be the most accurate whenever a difference in performance was noticed, matching the preference identified in this review.
With regard to answering RQ2, it is concluded that no study evaluated in this review directly considers the use of its approach in the context of a DT. Okaro et al [86] discuss the necessity of feedback control for adaptive PBF-LB/M, but do not include this aspect in their model. The work of Amini et al [66] is closest to a DT concept, as they developed a framework for process monitoring where multiple manufacturing systems producing the same part provide the required data. They therefore already think in a larger context, but as their approach aims at part quality prediction without giving the framework the possibility to adjust the on-going manufacturing process, it still cannot be categorized as a DT according to the definition given in the introduction.
While almost every work investigated here claims to have achieved reasonable results, understanding and reproducing this finding is not possible. There are two main issues hindering the transparency of the studies: first, as there is no standard regarding the reporting of AI training, it is not always easy to identify the necessary information on the data type and the number of training as well as testing data sets. Sometimes the respective information remains completely unclear. This indicates an essential need for standards, or at least common agreements within the community, on how such technical information necessary for the classification and reproduction of a published work is provided in the sense of good scientific practice. A first step could be the implementation of an overview table listing this information, similar to the way material properties and process parameters are reported in experimental or numerical studies in the manufacturing field.
Second, the number of training data sets varies significantly, from the low double-digit range (e.g. [71,72]) up to several thousand [69]. The studies with a lower number of training data sets seem to produce results as reasonable as the studies with a high amount, which on the one hand raises questions about the general minimum amount of data needed for accurate AI methods and on the other hand demonstrates that problems of varying complexity require a varying number of training data sets. Additionally, it is noted that almost every study relied on experimental data, with exceptions such as Mohajernia et al [85], who used numerically generated data to train their ANN. Numerical training data provides the possibility to generate a great amount of training data within a relatively short time after the numerical model has been set up and validated, and is therefore able to enhance the training of AI significantly. While the potential is high, it has to be kept in mind, though, that for successful numerical modeling the underlying mechanisms and phenomena have to be known.
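The idea of numerically generated training data can be sketched as follows. The 'simulation' below is a hypothetical analytic stand-in for a validated numerical model, and the function name, parameter names, and ranges are all invented for illustration:

```python
# Sketch: a (placeholder) numerical model generates labeled training data at
# near-zero cost, and a regressor is trained on it. The "simulation" is a
# hypothetical analytic surrogate, not a real process model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulated_deformation(power, speed):
    # hypothetical stand-in for a validated finite element / process model
    return 0.01 * power / speed

rng = np.random.default_rng(3)
power = rng.uniform(100, 400, 500)       # invented parameter ranges
speed = rng.uniform(0.5, 2.0, 500)
X = np.column_stack([power, speed])
y = simulated_deformation(power, speed)  # hundreds of labels, cheaply generated

model = RandomForestRegressor(random_state=0).fit(X, y)
fit_quality = model.score(X, y)          # R^2 of the surrogate-trained model
```

The caveat from the text applies directly here: the learned model can only be as good as the numerical model that generated its labels, so the underlying mechanisms must be known and the simulation validated.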

Conclusion
To investigate the current state of the art and progress of the implementation of the DT in AM, a systematic review has been performed, with 12 relevant publications remaining after the screening procedure. It became evident that even though there is an increasing number of studies taking up the idea of the DT, most works are still at the level of a DS. The two works clearly classified as a DT present theoretical concepts. In conclusion, while the technical innovations enable the collection and interpretation of an increasing amount of data, the step of implementing a two-way communication, where a DT framework is capable of directly adjusting the on-going process based on the predictions and identified possible dangers to the manufacturing, is still in a conceptual phase and has not been put to the test yet. On the way to achieving the goal of implementing a complete DT in AM, three main challenges have been identified through the review, under the assumption that various research groups have to work together in order to create a DT: first, there is a huge need for standardization and the consistent use of terminology. Second, the development of general frameworks and corresponding physical and virtual interfaces is required to be able to connect the different systems. Here, especially the practical implementation of those is of high interest, as the current state of the art is at a conceptual phase. This includes the data acquisition, processing, and visualization of the already available sensor technologies, as well as the implementation of predictive AI models to be able to assess the complete product development and manufacturing in real-time. Last, current models of the production processes are either too complex for state-of-the-art computational resources to achieve real-time prediction and multi-scale modeling, or lack accuracy and reliability.
During the systematic review, the importance of applying AI to operate on the data collected by the physical systems has been highlighted repeatedly. In consequence, a second systematic review on the current application of AI in AM has been carried out, with emphasis on the aspect of quality assurance. After the screening procedure, 38 studies of relevance have been analyzed. It was found that mainly the challenges of microstructural defects, surface roughness, and part deformation are addressed. Furthermore, an overwhelming focus on the application of NNs has been identified, indicating the suitability of this AI method. Although the publications on the DT emphasize the need for AI methods, the AI studies are not directly designed for integration into such a larger context. Last, two issues with the reporting of training data, namely the type of data and the number of data sets, have been identified, and first steps towards a solution of those issues have been proposed.

Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).