Using Augment Design to Resolve a DOE Blocking Mistake

DOE plays an important role in AMAT's end-to-end continuous process, design improvement, and optimization. Unfortunately, many DOE practitioners pick whichever JMP design they are familiar with, while lacking a foundation in DOE statistics, resulting in poor predictive models. As one of the most powerful Resolution IV designs, the Definitive Screening Design (DSD) allows us to study both main effects and interaction effects of a large number of predictors in a relatively small DOE run size. However, a DSD cannot tolerate orthogonality violations such as GRR noise, SPC time noise, design constraints, or a recursive stepwise algorithm.

We studied a special case involving a DSD blocking design. Instead of assigning the two operating systems as a predictor factor, they were assigned as a blocking factor. The DOE operators did not pay full attention and changed the run order, not following the blocking plan. After the first nine runs (the first block) were completed, the design process owner found the blocking mistake and stopped the DOE runs immediately, which left a poorly evaluated design. After discussion, we found five alternative resolutions to improve the orthogonal design structure, and conducted detailed design diagnostics on each alternative DOE proposal, considering the DOE schedule and cost constraints. Among the five proposals, augmenting the DOE by adding nine new points (all at corners) best recovers the orthogonality risks at the highest return on investment. Through this DOE blocking case study, we further upgraded our JMP DOE knowledge and practiced effective communication through the crisis.

Good afternoon, everybody. My name is Yi Zheng. I work for Applied Materials as a semiconductor process engineer, specializing in 3D processes. It's my honor to speak here, and I'm glad to share my topic with you. Today, I'd like to talk about a study on using JMP's Augment Design to resolve a DOE blocking mistake. First, I would like to express my gratitude to my mentors, Jiaping and Charles, for teaching me how to use JMP for data analysis and for assisting me in completing this project.

Now, let's start with the project background. As an engineer, I often need to design a DOE to help with process tuning for my fab customer. One day, when I needed to consider the impact of time on process results in a DOE, I added two blocks into the design, as the table on the right shows. Of course, I have normalized all of the data to pass my company's IP check. Six factors are used in total. I created a 17-run DSD design that takes two blocks into account, day one and day two, just like this.

However, something occurred: I could no longer run the second block. Now the question is, what should we do next? I didn't want to waste the data I had already collected. Therefore, my plan was to study whether I could use Augment Design to remedy the model. First, a quick introduction to the DOE workflow scorecard, which can serve as guidance for conducting a DOE, as shown in the table.

It starts from the DOE scope and objective, which means finding all responses and factors, and then building a near-orthogonal design. We are stopped there because of a constraint case: the original DSD structure has been broken. We had to build a new structure before the next step, by adding the augment design I mentioned before.

Third, we need to evaluate the design diagnostics, and based on the evaluation, we can determine the DOE data collection plan. Steps 5 through 8 cover DOE result analysis and model improvement after data collection; we won't discuss them today.

Now, let's move to the evaluation metrics. How do we assess our new design structure? I propose a set of criteria. The first is power analysis, which demonstrates the model's ability to judge whether a factor's trend is positive or negative. Its mean value should be greater than 0.75. The second is prediction variance; values below 0.05 indicate good prediction accuracy.
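The relative prediction variance behind this criterion is x'(X'X)⁻¹x: the variance of a model prediction at point x, divided by the error variance. A minimal sketch in Python, using a hypothetical 9-run, two-factor main-effects design (illustrative only, not the talk's DSD):

```python
import numpy as np

# Hypothetical 9-run design matrix (intercept + X1 + X2), a 3x3 grid of
# coded levels; used only to illustrate the prediction-variance formula.
X = np.array([
    [1, -1, -1], [1, -1, 0], [1, -1, 1],
    [1,  0, -1], [1,  0, 0], [1,  0, 1],
    [1,  1, -1], [1,  1, 0], [1,  1, 1],
], dtype=float)

XtX_inv = np.linalg.inv(X.T @ X)

def relative_prediction_variance(x_point):
    """Var[y_hat(x)] / sigma^2 at a candidate point x (including the
    intercept term): x' (X'X)^-1 x."""
    x = np.asarray(x_point, dtype=float)
    return float(x @ XtX_inv @ x)

print(relative_prediction_variance([1, 0, 0]))  # value at the center point
```

JMP's Prediction Variance Profile plots this quantity across the factor space; the 0.05 cutoff is the speaker's rule of thumb, not a universal constant.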

Then D-efficiency, the design efficiency, shows the orthogonality of the DOE structure. It should be more than 0.8, or at least 0.75. The fourth point is to use the multivariate function to see the scatterplot matrix. The design points need to fill the matrix as much as they can, especially the center and corner spaces. It's also necessary to analyze the confounding risk. Here is the metric.
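The standard D-efficiency formula is det(X'X)^(1/p) / n for a model matrix X with n runs and p terms, factors coded to [-1, 1] (JMP reports it as a percentage). A minimal sketch, using a 3² full factorial with a main-effects model as an illustrative stand-in for the talk's design:

```python
import numpy as np

def d_efficiency(X):
    """D-efficiency (0..1 scale) of a model matrix X with factors coded
    in [-1, 1]: det(X'X)^(1/p) / n, where n = runs and p = model terms."""
    n, p = X.shape
    return np.linalg.det(X.T @ X) ** (1.0 / p) / n

# 3^2 full factorial with an intercept column (illustrative example)
levels = [-1, 0, 1]
pts = np.array([[a, b] for a in levels for b in levels], dtype=float)
X = np.column_stack([np.ones(len(pts)), pts])

print(round(d_efficiency(X), 3))  # ≈ 0.763 for this design
```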

First, download the confounding correlations table. We get the correlation between each pair of factors and then remove the diagonal elements, so that we can calculate the confounding risk in the two areas. The average and the maximum value are both suggested to be less than 0.3. With these metrics in place, we start the evaluation from our original design.
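The check described here, taking absolute correlations between model columns, dropping the diagonal, and comparing the mean and maximum against 0.3, can be sketched as follows. The 2³ full factorial below is a hypothetical stand-in for the exported correlations table:

```python
import numpy as np

# Hypothetical 2^3 full factorial (main-effect columns only), standing
# in for the real design whose correlations table was exported.
design = np.array([
    [-1, -1, -1], [-1, -1,  1], [-1,  1, -1], [-1,  1,  1],
    [ 1, -1, -1], [ 1, -1,  1], [ 1,  1, -1], [ 1,  1,  1],
], dtype=float)

def confounding_risk(X):
    """Mean and max |correlation| over distinct column pairs
    (the off-diagonal of the absolute correlation matrix)."""
    R = np.abs(np.corrcoef(X, rowvar=False))
    off = R[~np.eye(R.shape[0], dtype=bool)]
    return off.mean(), off.max()

mean_r, max_r = confounding_risk(design)
print(mean_r <= 0.3 and max_r <= 0.3)  # an orthogonal design passes both
```

Applied to a broken design such as the nine completed runs, the same check would flag which factor pairs exceed the 0.3 criterion.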

Remember, the design has 17 runs in total. The eight points of Block 2 could not be achieved, so we can only evaluate the nine remaining points. Unfortunately, the evaluation failed. As you can see, X5 and X6 are missing, and the evaluation raises alerts. I think this is because the sample size of a DSD is very small: missing several samples seriously breaks the DOE model structure, which affects JMP's evaluation of it. We cannot trust the design evaluation once such an error happens.

Now, let's go back to the DOE workflow table. We can't continue with step 3, since it's hard to build a good DOE structure based on the remaining nine points. We need to go through step 8 to add an augment design, and then do the design evaluation. Now, I will try to resolve the case. There are five alternative options. Design 1, mentioned before, ends with the original Block 1, nine points completed, but its evaluation failed. Design 2 is to redo the original 17-run DSD; it's regarded as the baseline group. Designs 3, 4, and 5 add augment points, fold-over points, and space-filling points, respectively.

As for the quantity, adding 8 and 17 points for the augment and space-filling designs could be better, so that the tests are comparable with the baseline group. When we judge which design is suitable, we have to consider the cost. Here is the sample size. As we know, in our customer's fab, resources are always limited. In a production fab, it's often hard to get enough tool time to conduct a large number of experiments. Therefore, cost will be a key point in the assessment.

Here is design two, a standard DSD with 17 runs, as the baseline group. We get excellent factor power, prediction variance, and D-efficiency. From the color map on correlations, there is no Resolution II or Resolution III risk, since the DSD points always build a perfectly orthogonal structure. However, if we select this option, it will cost at least seven wafers.

Let's move to design three, adding nine augment points. We find a greater power value, 0.95, which is close to one, meaning we have more confidence in judging positive or negative factor coefficients. Meanwhile, we obtain better prediction variance and D-efficiency. But the resolution risk seems worse, because our original orthogonal DSD structure has been broken. The confounding effect cannot be avoided perfectly; in particular, X4 versus X1 and X5 reaches the maximum value. In further model analysis, we should seriously consider the interaction effects of X4 with X1 and X5.

This slide shows 17 augment runs. With more additional points, we obtain better factor coefficient power, prediction variance, and D-efficiency. However, please pay attention to the Resolution III confounding risk, because the maximum value seems to be higher than in the previous design. It indicates that adding more points doesn't effectively resolve the confounding risk.

This slide shows another augment design function: fold-over points. Its function is to design symmetrical points based on the original data, with zero as the center. The structure produced by this design is the same as the original nine points, so JMP cannot give us an evaluation. My guess is that the original design lacked the corresponding trends for X5 and X6, which cannot be compensated for through a symmetrical design.
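Fold-over augmentation, as described here, mirrors each existing run through the center by reversing the sign of every factor setting. A minimal sketch (the three-run fragment is hypothetical):

```python
import numpy as np

def fold_over(design):
    """Append the fold-over of a design: each run mirrored through zero
    by reversing the sign of every factor setting."""
    design = np.asarray(design, dtype=float)
    return np.vstack([design, -design])

# Hypothetical 3-run fragment, factors coded in [-1, 1]
base = np.array([[ 1, -1,  0],
                 [-1,  0,  1],
                 [ 0,  0,  0]], dtype=float)

folded = fold_over(base)
print(folded.shape)  # (6, 3): original runs plus their mirror images
```

Note that any factor held at zero in the original runs stays at zero after folding, which is consistent with the observation that fold-over could not recover trends for X5 and X6.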

The final choice is to use space filling. You can take a look at the points added in the table; their values are not integers. If you focus on the multivariate scatterplot matrix, you'll find that the distribution structure is not good-looking, even a little disordered. There is no doubt that its D-efficiency is terrible. But it's worth mentioning that the confounding risk of this design is slightly lower than with the augment points and doesn't exceed our criterion of 0.3. With more space-filling points, we obtain better factor power and prediction variance, although lower D-efficiency.

Now, let's summarize all of the designs mentioned before. Design 1 ends with the first nine points completed and cannot even be evaluated. Besides, it's useless to add fold-over points. The alternative options are augment points and space-filling points. Adding augment points can produce a good design structure, but we need to pay more attention to the confounding risk, which will add noise to the factor interaction effects in further analysis.

The other method studied is to add space-filling points. It achieves less confounding risk but shows a bad design structure; I'm not sure whether it can build a good model. I think I need to conduct the experiment to verify it. Now we can go back. Based on the previous analysis, the evaluation of design diagnostics has been done. Each option has its own characteristics and requires experiments to prove its practicality.

Why do I keep showing this scorecard? Because I have always believed that a DOE workflow like this helps us avoid many mistakes. The content presented in the table is quite comprehensive, and I will continue to follow this process in my future work.

From this project study, we draw some conclusions. First, we have created a basic set of metrics for judging a DOE evaluation. Stricter criteria can reduce the risk in further model analysis. Second, the most important risk for a DSD model is constraint risk: if some runs cannot be achieved at all, the evaluation will fail directly.

Third, based on the DOE evaluations above, adding an augment DOE can be a good remedy, although it cannot perfectly avoid the confounding effect; and adding even more points is not useful. This discussion is limited to the mathematical evaluation in JMP. More importantly, experiments are needed to verify it.

Therefore, my next plan is to implement the above-mentioned DOEs in my customer's fab and use actual results to prove which method is more effective. Once again, I would like to express my gratitude to my mentors, Jiaping and Charles. Special thanks go to US JMP for giving me this opportunity to attend the conference. That's all. Thanks a lot.