Applying stochastic simulation to study defect formation in EUV photoresists

Extreme ultra-violet (EUV) lithography resolves features below 11 nm. However, photonic and chemical variations at these photon energies and dimensions lead to rare (below 1:10^9) stochastic defects that cause device failures even in stable manufacturing processes. This study investigates a methodology intended to identify root causes of stochastic defects along with potential mitigation paths. Simulation techniques using pseudo random numbers are used to identify the combinations of photonic and chemical events or distributions that fail. Failing combinations that recur across many photon-chemical configurations are thought to have potential mitigation methodologies. Photonic effects demonstrated a significant impact on stochastic defect formation, with approximately 73% of the photon seeds resulting in a failure in at least 60% of the trials. The material results were mixed, with large failure quantities that individually demonstrated low impact. Photonic shot-noise-based failures dominated in this study, and these failures will not be mitigated by material enhancement alone.


Introduction
Stochastic wafer defectivity is a driving concern in extreme ultra-violet (EUV) lithography that sits at high priority on industry road maps and challenges researchers at the cutting edge of semiconductor manufacturing. A stochastic defect may be defined as the presence (or absence) of photoresist in a wafer pattern where the opposite is predicted by a continuous model under nominal conditions. The effect is highly localized. Bridged trenches and broken lines are examples of such an effect. Stochastic defects do not include particle adders, defects on the mask, etc. This type of defectivity is caused by the inherent statistical nature of the EUV exposure process and the chemical response of the photoresist. Further, the defect occurs under nominal process conditions without external interference such as a particle or wetting agent. Stochastic defect occurrence is amplified when multiple extreme events in the resist and/or photon systems occur at the same time and position.
There have been many literature studies 1,2) of stochastic defectivity in laboratory EUV exposure processes. These studies were critical in providing understanding and a modeling foundation for this defectivity type. 3) However, there are many stochastic processes that occur during EUV exposure. The initial photon arrival is statistical in nature, 4) as are secondary electron generation, 5) subsequent absorption, and further downstream processes. 6) Experimental studies have provided the foundation for stochastic process modeling, but they cannot readily examine the individual mechanisms leading to a defect.
In this paper we will use many stochastic simulations of the same process to present a technique to study the individual stochastic mechanisms in an EUV exposure process. Our stochastic simulations make use of Monte Carlo techniques for each unique mechanism in the exposure.
Using the proposed analysis allows one to attribute a statistical likelihood of a defect being caused by a specific event.

Stochastic defect models
Stochastic defects are random catastrophic events at nominal process conditions that occur at very low rates such as 1:10^9 or lower. As this is a computational study, only defects arising from the stochastic processes that are part of the model can be investigated. Defects arising from material impurities and unmodelled process variations, such as light source variations or pattern position on the wafer, are not part of this simulation study and thus do not contribute to the results. This constrains the types of stochastic defects to those that have no origin external to the nominal patterning process.
Stochastic defects occur when a process is functioning within its typical operating conditions of focus, dose, material age, homogeneity, temperature, etc. Stochastic defect rates are a function of CD for a specific pitch. This failure frequency as a function of CD was first described by De Bisschop et al. as a Stochastic Failure Window. 1) In dense grating structures, bridging defect rates increase as spaces shrink and line widths grow, while line break defect rates increase as spaces grow and line widths shrink. The Stochastic Failure Window consists of a bridging failure trend line, a pinching failure trend line, and a floor. The two trend line slopes tend to be slightly different. The origins of the defect floor are not clear and may be process or material related.
Stochastic defect models may be split into two classifications. The first uses Monte Carlo techniques to solve the differential equations governing the lithography process stochastically. 7-9) The second uses a continuum (non-stochastic) model to create some quantity which correlates with wafer defectivity. The second method is fast enough to be applied at full-chip scale but does not have the same accuracy as the Monte Carlo method. 10) Moreover, Monte Carlo techniques allow one to investigate the contributions of individual process components. For this study, we will use a model which utilizes Monte Carlo techniques.
Stochastic data are generated in computer simulations using pseudo random numbers. Pseudo random numbers are calculated as a sequence of numbers that appear to be random and pass typical statistical tests but are generated through a deterministic algorithm. The sequence is characterized by the initialization seed of the pseudo random number generator. A given seed will always generate the same sequence of random numbers. This leads to random but reproducible simulation results. The random number generation algorithm used in this study is the Mersenne Twister as provided by the C++11 standard library. 11) Every stochastic subprocess of the simulation, such as acid generation or species distribution, is initialized with its own seed, allowing independent but reproducible randomization. The random sequence is then used to place a subprocess, such as acid activation, at a position within a distribution, and the value drawn from the distribution is used in the calculation for the subprocess. We use the seed number to identify a randomly generated data set in this study.
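The seed-per-subprocess scheme can be sketched as follows. This is a minimal illustration in Python, whose random module also implements the Mersenne Twister (the study itself used the C++11 implementation); the class and method names here are hypothetical, not the study's tooling.

```python
import random

class StochasticSubprocess:
    """A simulation subprocess (e.g. acid generation) carrying its own
    independent, reproducible random stream."""

    def __init__(self, name, seed):
        self.name = name
        self.rng = random.Random(seed)  # Mersenne Twister, seeded

    def sample_event(self):
        # Draw a value from the subprocess's governing distribution.
        # A placeholder Gaussian is used here; the real model uses
        # process-specific distributions.
        return self.rng.gauss(0.0, 1.0)

# The same seed always reproduces the same event sequence.
a = StochasticSubprocess("acid_generation", seed=42)
b = StochasticSubprocess("acid_generation", seed=42)
assert [a.sample_event() for _ in range(5)] == [b.sample_event() for _ in range(5)]

# A different subprocess gets its own seed, so its stream is
# independent of the others but individually reproducible.
quencher = StochasticSubprocess("quencher_generation", seed=7)
```

Because each subprocess owns a separate generator, one subprocess's realization can be re-randomized while every other subprocess replays exactly the same events, which is the property the later perturbation study relies on.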

Defect investigation methodology
It is well known that photon arrival times are governed by a Poisson distribution. 12) However, this is only the first step in a series of statistical physical processes during EUV exposure. These processes can be classified as either photonic events or chemical events. In our model we use the following photonic events: photon arrival (shot noise), secondary electron generation, and photon/electron loss. In addition, we use the following chemical events: acid generation, quencher generation, inhibitor deprotection, and acid neutralization. Each of these events (whether photonic or chemical in nature) is driven by its own probability distribution in our model. Therefore, they can be independently controlled and studied. Naturally, this leads to the question: do any of these processes dominate when defects are formed?
In previous work, 13) we studied individual stochastic defects to determine the root cause of each defect. Figure 1 is an example stochastic defect discovered to be generated by an elevated quencher distribution: a simulated stochastic bridging failure caused by the quencher distribution, secondary electron distribution, and inhibitor distribution. Elevated quencher, together with elevated photo-generated quencher from the secondary electrons, removed acid, leaving too little acid to deprotect enough inhibitor to allow resist removal from the space during develop. This is a specific case, but identifying multiple instances of the same case is more interesting. This methodology proved informative but is not scalable due to the number of simulations required. Still, we desire to study stochastic defectivity throughout the entire resist system. Stochastic defects in a standard process near the center of an acceptable process window are rare. The rarer a defect, the greater the number of stochastic Monte Carlo simulation runs needed to emulate it with high confidence in a model. 14) On average, a three standard deviation event is observed every 370 observations. This number of observations may produce an observation of the rare event, but its probability is still unknown. To describe the rarity of a three standard deviation event with acceptable confidence, at least 10× as many observations (or at least 10 observed rare events) are required. For a four standard deviation event, this would require at least 150 000 observations per experimental treatment combination, and 17.5 million for a five standard deviation event.
Observations of rare events can be thought of as Bernoulli trials if we count the observation of a defect as a success. If the trial runs are independent, then we can treat the experiment as binomial. The binomial distribution approaches the Poisson distribution when n is large and p is small. For a three standard deviation event, p ≈ 0.0027, which gives a standard deviation of approximately 5.2%. This means that when we observe 10 defects, the chance of observing fewer than 9 or more than 11 defects under the same underlying distribution is small. This gives us high confidence that the observed events are a real phenomenon. Thus, to characterize defects that appear with odds of 1 in 370, one needs 3700 simulations. Extending this methodology to understand the contribution of the various stochastic processes, one must also understand the correlation with each of the other stochastic effects, which requires the same number of simulations per effect. In our model there are seven stochastic components; to characterize them all, 7.5 million simulations are required. We are interested in finding the causes of the rare defects. A similar calculation for five standard deviation events would require 750 million simulations, and six standard deviation events would need 250 billion simulations.
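The observation counts quoted above can be reproduced from the two-sided normal tail probability: a k-standard-deviation event has probability p = erfc(k/√2), and observing roughly 10 such events requires about 10/p trials. A short Python sketch (the helper names are mine, not the study's):

```python
import math

def tail_probability(k):
    """Two-sided probability of an event at least k standard
    deviations from the mean, under a normal distribution."""
    return math.erfc(k / math.sqrt(2))

def trials_for_confidence(k, events=10):
    """Trials needed to expect `events` occurrences of a k-sigma event."""
    return events / tail_probability(k)

for k in (3, 4, 5):
    p = tail_probability(k)
    print(f"{k} sigma: p = 1 in {1/p:,.0f}, "
          f"~{trials_for_confidence(k):,.0f} trials for 10 events")
```

This recovers the figures in the text: a 3-sigma event occurs about 1 in 370 observations (so ~3700 trials for 10 events), a 4-sigma event needs roughly 150 000-160 000 trials, and a 5-sigma event roughly 17.5 million.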

Process model
This study uses a simulated ASML EXE:5000 scanner system 15) with a circular obscuration exposing an emulated stochastic resist with an artificially increased defect count. The mask also introduces a stochastic component through mask ripple. This section gives an overview of these concepts. The relevant process parameters are summarized in Table I. Further details, especially of the resist modifications, are available in previously published works. 16) The resist model used in this study is referred to as the suppressed EUV chemically amplified resist model, 16) or the suppressed model. The parameters in this model are set to image 22 nm features using 0.55 NA illumination. To facilitate the study of defectivity, the suppressed resist defect rate is approximately 10^6 times greater than a minimally acceptable 10 defects per cm^2. This increased defectivity rate produces fewer than 2:20 000 defects in the stochastic target case for an approximately 100 nm by 100 nm simulated region with no other perturbations. The suppressed model defect rate produces a defect response that can be studied using 10^5 simulations, as opposed to the 10^9-10^11 necessary for a production resist with a defectivity rate of 1 defect per square millimeter or less. A stochastic failure window for the suppressed resist exposed with a simulated ripple in the mask is shown in Fig. 2 to quantify the suppressed resist defectivity response. The work in this study is targeted at 11 nm, which is an elevated defect region. This model is useful for trend comparison and defectivity study. It is not well correlated to exact wafer CDs.
The mask in this study uses 60 nm 4× TaBN/TaBON based absorbers. The molybdenum and silicon multilayer includes two MoSi2 diffusion layers. 17) In addition, mask ripple is introduced into the mask as shown in Fig. 3. The mask ripple has 12.5 rms roughness and a 5 nm correlation length (both 1× scale) on the mask blank. The multilayer reflector is mapped conformally onto the rough surface, generating the ripples seen in Fig. 3.
The features in this study are at 22 nm pitch with 14 nm lines and 8 nm spaces, all values wafer scale. The wafer pattern is targeted to 11 nm lines and 11 nm spaces at wafer scale. This configuration is used to increase defect sensitivity. The model uses EXE:5000 settings as depicted in Fig. 4. The model is at 0.55 NA, with 4×/8× anamorphic magnification, a 0.21 circular obscuration, a maximum addressable illuminator sigma of 1, and a leaf illuminator with D = 1.1157. These settings produce NILS values that range from approximately 1.8 to larger than 6, depending on measurement location, due to the mask roughness as well as the stochastic settings.
NOTE: The large dose is due to the desire for low defectivity combined with the very thin spin-on height, the negatively biased mask space, and the chosen Dill C. The Dill C is selected from a current EUV resist. 18) A Dill C would most likely need to be higher in a real 0.55 NA EUV resist system, which would lower the dose.

Stochastic simulation
The suppressed resist in this model is simulated stochastically to understand the influence of different stochastic processes. In a stable system, stochastic processes cause a distribution with a mean close to the target and a variance within manufacturing tolerances. In rare cases the variation caused by stochastic processes can interact to form a defect. In a laboratory setting, it is impossible to perfectly replicate the conditions which caused a particular failure. In our simulation this is accomplished by selecting which random seeds to vary and which to keep constant. Identifying commonality in these rare interactions is the purpose of this study.
The stochastic parameters used in this study are described in Table II. Each of these parameters is potentially a Parameter of Interest (PoI). A PoI is a parameter or group of parameters under investigation to identify its potential impact on rare defect generation. In Table II, a post zero time location is an incremental time step after zero, which is the instant at which the simulation begins. For this study, that location is 10^-6 s.
Stochastic simulations create specific instances of these distributions which are identified by the seed number. The seed number for a parameter identifies a specific realization for the stochastic subprocess in each case. Therefore, the same seed values must be used throughout the study in all cases to allow proper comparison.
The simulation area contained nine space measurements as seen in Fig. 5(a). The center measurement point is used for most of the investigation in this study while the remaining eight measurements are used for CD targeting and CD validation.

Study design
The procedure used in this study consists of three major steps, each with a number of complex component steps. The major steps are: computing and analyzing a stochastic model, generating a standard failure set, and tabulating the seeds that produced a stochastic defect, referred to as failed seeds. The full flow is shown in the flow chart in Fig. 6.

Stochastic model analysis
The first step in the analysis is to perform a complete stochastic analysis on the desired model. In this step, the stochastic component variation is synchronized such that all components use the same random seed. The seed is any user-specified unique positive integer. These seeds are then varied N times, where N was 20 001 in this study. The same set of N seeds must be used in every trial throughout the entire study.
The simulated seeds are split into a group where a defect was found (failed seeds) and a group where no defect was identified (good seeds). In this study, the good list was approximately 400× larger than the failed list. However, in a resist system where defectivity is not artificially boosted, the proportion of failed seeds would be much lower.

Standard failure set
The standard failure set is a list of failed seeds that produce defects when seeds from the good seed group are perturbed in one specific case. The specific case is the PoI and may include more than one parameter set to the same random seed perturbation. The result of this effort is the standard failure set, which is used to investigate the impact of each stochastic failure mechanism. The first step in generating the standard failure set is to choose several good seeds from the stochastic model analysis run. In this study, four good seeds were chosen at random. The next steps are performed on each of the good seeds.
Set the stochastic parameters to a good seed value. Then set the PoI to vary through the N chosen random seed perturbations. This study used 20 001 perturbations for N in steps of 1000.
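The perturbation loop for one good seed can be sketched as follows (an illustrative Python sketch; `run_simulation` and its interface are hypothetical stand-ins for the stochastic resist simulator, and the toy defect rule exists only to make the sketch runnable):

```python
def run_simulation(seeds):
    """Hypothetical stand-in for the stochastic resist simulator.
    Takes a dict of {subprocess_name: seed} and reports whether the
    simulated region contains a defect."""
    # Placeholder rule giving a rare, deterministic "defect" so the
    # sketch runs end to end; the real simulator does the physics.
    return (seeds["poi"] * 2654435761) % 1000 < 3

GOOD_SEED = 12345
SUBPROCESSES = ["photon", "electron", "pag", "quencher", "inhibitor"]

failed_poi_seeds = []
for poi_seed in range(1, 20002):            # N = 20 001 perturbations
    seeds = {name: GOOD_SEED for name in SUBPROCESSES}
    seeds["poi"] = poi_seed                 # only the PoI seed varies
    if run_simulation(seeds):
        failed_poi_seeds.append(poi_seed)
```

Holding every non-PoI seed at the good value while sweeping the PoI seed is what isolates a single stochastic mechanism; any defect that appears can only have come from the PoI realization.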
After a good seed stochastic run is completed, the failed PoI seeds are gathered into a list. This is repeated until the good seed list is exhausted. After all good seeds are analyzed, the failed seeds are placed into a table such as Table III. Then the PoI seeds that failed for each good seed are identified. Each PoI failed seed must fail for at least one good seed or it will not be in the list. Some PoI seeds may fail for more than one good seed. The completed list of PoI failed seeds, along with the number of good seed runs during which each PoI seed failed, is the Standard Failure Set.
The seeds in the Standard Failure Set caused a defect when they perturbed known good stochastic configurations as identified by good seeds. The working hypothesis driving this method is that failed seeds that produce defects frequently are stochastic cases which may have a root cause that can be identified and possibly addressed in some manner.

Failing seeds
The failing seeds are investigated in a similar manner to the good seeds. The difference is in the tabulation against the Standard Failure Set. The same N random numbers are used for the failing seeds as were used for the good seeds and the initial model analysis. In this study, approximately 120 failing seeds were found in the model analysis phase, and 15 of those seeds were investigated.
The bad seed branch in Fig. 6 is followed for each of the failing seeds, resulting in a list of PoI failed seeds for each failing seed. Once the PoI failed seeds are identified, they are placed in the Standard Failure table. However, in the failing seed case, only PoI failing seeds that are already in the Standard Failure table are counted; seeds not listed in the Standard Failure table are ignored. Table IV is a mockup of a portion of a completed table. In the table, if a PoI failed seed found from the good seed process appears in a failing seed process, that seed is accounted for in the table with a 1. The number of times a PoI seed is found is totaled across the row to produce the Fails value. The Fails value will always be at least one, from the good seed run in which the PoI failing seed was identified.
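The tabulation just described amounts to a set intersection and a count. The sketch below illustrates it in Python with made-up seed numbers; the data structures are illustrative, not the study's actual tooling.

```python
# Standard Failure Set construction: PoI seeds that failed for at least
# one good seed, mapped to the number of good-seed runs in which they failed.
good_seed_failures = {
    101: {11, 23, 42},   # PoI seeds that failed with good seed 101 held fixed
    102: {23, 57},
    103: {23, 42, 99},
}

standard_failure_set = {}
for poi_seeds in good_seed_failures.values():
    for s in poi_seeds:
        standard_failure_set[s] = standard_failure_set.get(s, 0) + 1

# Failing-seed runs: only PoI seeds already in the Standard Failure Set
# are counted; anything else is ignored.
failing_seed_failures = {
    901: {23, 42, 77},   # 77 is not in the Standard Failure Set -> ignored
    902: {23},
}

fails = dict(standard_failure_set)  # start from the good-seed counts
for poi_seeds in failing_seed_failures.values():
    for s in poi_seeds:
        if s in fails:
            fails[s] += 1

# Each value is the row total ("Fails") for that PoI seed; here PoI seed 23
# failed in all three good-seed runs and both failing-seed runs.
print(fails)
```

The resulting Fails value per PoI seed is the quantity binned and plotted in the analysis that follows.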

Discussion
The methodology presented in this study was employed to study a subset of a stochastic process using the procedure described previously. The specific PoIs are listed in Table V. Both the PAG and quencher distributions include activation and PEB kinetics in their PoI to account for the complete reaction during the PEB process. The failure data gathered from the Standard Failure Set analysis of the 15 failing seeds are binned and plotted in Fig. 7. The secondary electron distribution demonstrates an observably different distribution from the two chemical processes. The PAG distribution produces the most failures, followed by the secondary electron distribution and the quencher distribution, in a ratio of approximately 12:2:1, respectively.
The seed data were analyzed, with potentially interesting points presented in Table VI. The first observation is that while the secondary electrons produce 50% of the number of failing seed matches of the PAG distribution PoI, the secondary electrons have only 20% of the number of seeds in the Standard Failure Set compared to the PAG distribution PoI. The impact of this observation is seen in the shift of the secondary electrons to bin values that are primarily 11 or larger.
Two other metrics are presented to better visualize the impact of the individual failing seeds. The first metric in Table VI, the 60% failure case, is the quantity and proportion of seeds that fail in at least 60% of the 19 cases. This metric shows that each failing secondary electron seed impacts the failure rate more significantly than the two chemical distributions, with approximately a 10× greater impact than the quencher distribution and an even greater impact than the PAG distribution.
A second method to understand a failing seed's impact is to quantify the number of runs necessary for a portion of the seeds to fail. In this study, the proportion of runs needed for 80% of the failures to occur was calculated and is presented in the last column of Table VI. This shows similar values for the quencher distribution and the PAG distribution, but 80% of the secondary electron seed failures require more than 3× as many runs as in the two distribution cases.
A single secondary electron failed seed was analyzed and the results are in Fig. 8. The plots in Fig. 8 depict the concentration of secondary electrons in a simulated region. The four instances in Fig. 8 come from the same failed PoI secondary electron seed but different failed seed settings for the remaining seeds. The seed chosen for this analysis failed in 14 of the 19 runs.
The h and j instances were from the 5 cases that produced a defect-free space, while the i and k instances were from the 14 instances that produced a bridged space. Instance h shows an observably larger secondary electron value than the other three instances. This is an interesting observation because instance j also produced a defect-free space. There are several potential reasons instance j appears to have low secondary electrons but still produces a clear space. A first potential explanation is that the observed area is a single XZ plane and adjacent XZ planes provide enough secondary electrons to produce a clear trench. Another possible explanation is that more of the secondary electrons in the j case produce reactions than in the i and k cases. An additional explanation is that chemical effects such as acid diffusion have a longer range in the j case than in the i and k cases. There are many other potential explanations. This observation indicates the analysis needs to include three-dimensional and time components to produce a complete understanding. It is possible that with this complete understanding, case j may provide insight as to why some of these stochastic defects form and perhaps allow for a method to eliminate them.
A similar analysis is shown for the acid distribution from the PAG PoI in Fig. 9. In Fig. 9, the x and z instances are from the 12 and 13 bins respectively and were simulated with failing seed sets that result in bridged spaces. The w and y instances are from bins 6 and 1 respectively and were simulated with failing seed sets that result in clear spaces. Both w and y show larger acid concentrations than x and z, which leads to the understanding that the failure mechanism for these cases relates to low acid.
Even though the observations show differences in acid between the w and y cases and the x and z cases, the PAG distribution PoI may not be the complete answer. It is possible quencher, inhibitor or another component must interact to form the bridged space. To fully analyze the case, three dimensional and time based analysis is necessary.
This output is similar to the previous figure, except it is for acid after the first time step. In this case, w and y both show clear spaces with high acid intensities. Instance w is more concentrated than y, but the intensity is clearly large. Instances x and z generate defects and have much lower acid concentrations than w and y. Instance x also has a limited acid distribution. The failure mechanism here is clear, so there may not be much to learn other than to improve acid generation homogeneity, which is already at the state-of-the-art limit, making this a difficult task.

Conclusions
A very large number of stochastic simulations is needed to understand rare defects. Here we demonstrated a novel technique that breaks down the impact of stochastic components in stochastic lithography simulations by examining defect formation based on the interaction of a single "bad" stochastic component with a known "good" stochastic system.
This approach leads to several conclusions. Failures that are mainly caused by shot noise turned out to be dominant: even when all subsequent simulation steps are shuffled, the defect remains. Some of these instances caused very high defect occurrence frequencies of larger than 60%. From this we conclude that there must be a minimal defectivity floor, caused entirely by shot noise, that cannot be mitigated by material enhancements.
However, there are more total defects caused by the process steps in the material than shot noise defects. In our model, approximately 2/3 of the failures are attributed to the material, whereas the remaining 1/3 is attributed to photonic effects. The defects caused mainly by the material did not include instances that individually produced high defect quantities; as soon as the starting conditions of all the other processes were shuffled, the defects did not persist. This indicates that, despite a potential photonic defectivity floor, material enhancements still play a major role. It is worth noting that not all material stochastics were examined in this study, and an analysis of all stochastic process variations would give a more detailed picture of the impact of the material on the overall defectivity.
Finally, better metrics need to be developed to help study these defects. Concentration intensity plots offer only a slice of information about the defect creation process, while the issue is a three-dimensional problem spatially and also has time and path components.
This method is generally applicable to stochastic process models such as Belete's stochastic organometallic photoresist model. 19) The meaning of the seeds depends on the stochastic steps that are part of the modeled process. Therefore, different resist types with different stochastic processes and mechanisms may yield different results than those presented here.