
Beyond Trial and Error: Case Studies on How AMAT Creates Value with DOE Workflow

Great software, when wielded by skilled hands, has the power to transform the world. The cases collected here, drawn from a leading semiconductor equipment company, show the tangible value that Six Sigma and JMP can create. Spanning JMP functionalities such as Design of Experiments (DOE), multivariate analysis, quality control, process optimization, and consumer research, this diverse set of real-world cases challenges conventional thinking and offers insights that other industries can emulate. It is not merely a showcase of DOE prowess; it presents JMP as a complete data-driven decision-making solution. The presentation is cohesive, connecting the cases through a workflow concept that serves as a crucial catalyst for successful Six Sigma projects.

Hello, everyone. My name is Jiayi Yang, and I'm a Process Support Engineer at Applied Materials. The topic I want to share today is Beyond Trial and Error: Multiple Case Studies on How Applied Materials Creates Value with DOE Workflow.

First of all, let me introduce the company, Applied Materials. It's the world's number one semiconductor and display equipment company. We provide almost every kind of semiconductor process tool except lithography machines. You can think of us as the supermarket, the Walmart, of the semiconductor field, and our goal is to make possible a better future.

With such a massive group of customers, we receive an abundance of requests every day. For example, if customers want to update their products, they may come to us for help. To show our customers our expertise and respond to their requests effectively and quickly, we conduct DOE projects to find the potential relationships between the variables and the final results. But the question is, how can we be sure that the final solution we give our customers is the best one? That's the topic I want to share today: how, inside Applied Materials, we control every step of the DOE workflow to make sure the final predictive model is accurate.

This report was greatly supported by three DOE experts: Jiaping Shen and Charles Chen from Applied Materials, and Pin Hu from JMP. Meanwhile, I'd like to thank all the process engineers who were involved in this project for bringing back practical experience from the customer side. Their experience helped us make this workflow more feasible and more suitable for our engineers.

There are a lot of analysis tools on the JMP platform, and in this talk we will focus on DOE. Let's move on to the topic itself. Inside Applied Materials, the DOE workflow is classified into eight steps. Bounded by the data collection step, the workflow can also be divided into three parts: the steps before data collection are about designing your experiment, and the steps after it are about establishing your predictive model.

To guide our engineers to carry out their projects more successfully, and also to improve model efficacy, there are a lot of hints and questions within each step. These questions act like instructions or warnings, telling our engineers which criteria they should follow within each step and, if they don't, which risks they will encounter at the end of the project.

Of course, each project is unique. These eight steps cover a wide range of scenarios, but no one size fits all. Still, we are quite sure that if every engineer follows these eight steps and fulfills the criteria within each one, even a junior engineer can implement a DOE project independently and finally obtain a good model.

Next, I will explain each step one by one. The first step is to define your DOE scope and objective, which means we need to figure out the potential relationships between the Y's and the X's. In order not to neglect any powerful correlations, we need to find all the potential variables.

To do that, we recommend using Six Sigma methods: for example, the Six Sigma cycle or the Design for Six Sigma parameter diagram.

These Six Sigma methods can help us find all the variables that may have a potential influence on the final results. For example, different batches of incoming wafers from our suppliers may have a determinant influence on our final results. Those are factors we usually ignore because we trust our suppliers, but sometimes they make mistakes. Other factors, like noise factors, may also turn out to be main factors in the final result.

Through the Six Sigma methods, we can find the Y's and the potential variables at a very high level. But the next question is, how can we implement this at an executable, detailed level? Here, we can use multivariate analysis to explore the multidimensional behavior of the variables and to further explore how multiple variables are related to each other.

The next step is to downsize the variables. To do that, we can use clustering of variables to help us pick out the most representative and independent ones. In this step, by combining Six Sigma methods and the JMP platform, we can finally pick out the Y and the powerful X's.
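As a rough illustration of this downsizing idea, here is a toy sketch in plain Python with invented data (this is not JMP's clustering-of-variables algorithm): when two candidate X's are almost perfectly correlated, they belong to the same cluster and we keep only one representative; a weakly correlated variable stays as its own candidate.

```python
# Toy sketch of variable downsizing via pairwise correlation.
# All variable names and data below are invented for illustration.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

flow_sccm = [10, 20, 30, 40, 50]
valve_pct = [11, 21, 29, 41, 49]       # nearly a duplicate of flow_sccm
temp_degC = [300, 250, 320, 280, 310]  # an unrelated knob

r_dup = pearson(flow_sccm, valve_pct)
r_ind = pearson(flow_sccm, temp_degC)
print(round(r_dup, 3))  # nearly 1 -> same cluster, keep only one of the pair
print(round(r_ind, 3))  # far from 1 -> independent, keep as a separate X
```

In a real project, JMP's clustering of variables does this grouping across many X's at once; the sketch only shows the pairwise intuition behind it.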

If we want to further quantify this relationship, we need to build a DOE model. To improve model adequacy, we need to care about the DOE structure. But here's a common misconception: is the DOE we've been talking about the real one? If we only change one factor at a time, then we are doing "EOD", an experiment of design, not an actual DOE, because a one-factor-at-a-time design is only suitable for exploring main effects.

If we want to detect interactions and other higher-order effects, we need to conduct an actual DOE, which is better described as designing an orthogonal experiment. In such a design, two or more factors are changed at a time.
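The difference can be sketched in plain Python, using two hypothetical factors A and B in coded ±1 units: in a 2² full factorial, the A*B interaction column is orthogonal to both main-effect columns, so the interaction is estimable on its own; in a one-factor-at-a-time plan it is confounded with the main effects.

```python
from itertools import product

# Full factorial: every combination of the coded levels -1 / +1.
factorial = [dict(A=a, B=b) for a, b in product((-1, 1), repeat=2)]

# OFAT: start at a baseline and change only one factor per run.
ofat = [dict(A=-1, B=-1), dict(A=1, B=-1), dict(A=-1, B=1)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def col(runs, key):
    return [r[key] for r in runs]

# Factorial: the A*B column is orthogonal to both main-effect columns.
ab = [r["A"] * r["B"] for r in factorial]
print(dot(col(factorial, "A"), ab), dot(col(factorial, "B"), ab))  # 0 0

# OFAT: the A*B column is confounded with the main-effect columns
# (nonzero dot products), so the interaction cannot be estimated cleanly.
ab_ofat = [r["A"] * r["B"] for r in ofat]
print(dot(col(ofat, "A"), ab_ofat), dot(col(ofat, "B"), ab_ofat))
```

The zero dot products are exactly the orthogonality that the speaker calls "the actual DOE" structure.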

Meanwhile, we need to take care with the design space range, because it is critical for exploring the effects. For example, a wider range is more beneficial for exploring main effects, but the power will be diluted; if we narrow the range, it is more useful for exploring interaction effects.

Sometimes, we need to handle design constraints for practical or physical reasons. For example, the equipment may not tolerate both high temperature and ultra-high pressure at the same time. If we just ignore these design constraints, the missing values may kill the orthogonal design structure and finally lead the DOE model to failure. To handle a design constraint, we can search for the closest neighbor point within the constraint to maintain a near-orthogonal structure, or simply redesign the entire experiment.
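One simple version of the "closest neighbor" idea is a Euclidean projection of an infeasible run onto a linear constraint boundary. This is a hypothetical sketch with invented numbers, not the method used in the paper discussed below:

```python
def project_to_halfplane(point, coeffs, limit):
    """Closest point to `point` satisfying sum(coeffs * x) <= limit."""
    excess = sum(c * p for c, p in zip(coeffs, point)) - limit
    if excess <= 0:
        return list(point)  # already feasible: keep the planned setting
    norm_sq = sum(c * c for c in coeffs)
    return [p - excess * c / norm_sq for p, c in zip(point, coeffs)]

# Hypothetical constraint: temperature + pressure (coded) must stay <= 10.
planned  = [8.0, 6.0]  # violates the constraint: 8 + 6 = 14
repaired = project_to_halfplane(planned, [1.0, 1.0], 10.0)
print(repaired)  # [6.0, 4.0] -> lands exactly on the constraint boundary
```

Moving each violating run as little as possible keeps the design close to its planned near-orthogonal structure, which is the point of the repair.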

Let's take a look at a past Discovery Summit China paper from AMAT, published in 2019 by Christy Pan. We can see from this image that three conditions in the right corner were missing because of an equipment capability constraint. If we just ignored these three conditions, the number of DOE runs would drop from 17 to 14.

The worst part is that the DOE orthogonal structure would be ruined. To handle this constraint, we need to shrink the design space range. But the question is, which parameter should we select to reduce the space? Through process understanding, Christy Pan finally found two possible compromise process settings: one changes the space range of Parameter B, and the other changes the design space of Parameter C. But which one should she finally select?

By repeatedly diagnosing and evaluating, comparing the power, the confounding, the prediction uniformity, the design space, and the prediction variance, Christy Pan finally chose the second one. Seeing this image, we can tell that she made a great effort to handle these design constraints.

This shows the respectable craftsman spirit inside Applied Materials. You may wonder: is there an easier way to handle these design constraints from the very beginning? Luckily for you, the JMP platform offers a better choice, and that's the custom design. This design can accommodate various types of factors, constraints, and disallowed combinations, and it also supports hard-to-change and very-hard-to-change factors.

We can specify which effects are necessary and which ones are desirable. The custom design will then offer us a set of DOE plans that accommodate these constraints. It's a much, much easier way to handle DOE plans that have factor constraints.
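Conceptually, a disallowed combination simply removes candidate runs before the design is chosen. Here is a toy sketch of that idea with invented factor names; JMP's custom design then picks an optimal subset of the allowed candidates rather than using them all:

```python
from itertools import product

# Candidate runs: a 3-level grid for two hypothetical factors
# (temperature, pressure) in coded units -1 / 0 / +1.
candidates = list(product((-1, 0, 1), repeat=2))

# Disallowed combination: high temperature together with high pressure,
# mirroring the equipment limitation described above.
allowed = [(t, p) for t, p in candidates if not (t == 1 and p == 1)]

print(len(candidates), len(allowed))  # 9 8
```

The design algorithm never proposes the forbidden corner, so no runs go missing after the fact and the structure stays intact.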

After making the DOE plan, we go to our customers or our boss to present the experiments, and 90% of the time we will be questioned and finally rejected, because they care about the experiment cost. They will ask, "Do you really need to do so many experiments?" For example, if we need to find the relationships among nine variables, we need to run at least 21 experiments. Considering that each wafer may cost up to thousands of dollars, the experiment cost is quite high, right? How can we handle this problem?

Luckily for you again, the JMP platform offers a better choice, and that's the group orthogonal supersaturated design (GO SSD), a special method that can help us pick out the most significant variables from an abundance of candidates.

For example, it can help us screen seven variables with only six runs. Incredible, right? Now, let's take a look at one practical case using this GO SSD method. In this case, a one-step process was divided into two steps by the GO SSD, which means we could test two conditions at one time on a single wafer. This definitely reduces the cost, because it only takes six runs for seven variables. Most importantly, the R-square was boosted from 27% to 92%, and the GO SSD result was successfully verified on the customer side. From this success, we know that the GO SSD design is a cost-friendly method that can help us pick out the powerful variables at the very beginning.

It's also worth mentioning that the GO SSD method only considers main effects. If we want to further maintain the DOE orthogonal structure, we still need to run an augmented DOE afterward.

Next, once we've made our DOE plan and the customers have finally agreed to do all the experiments, we come to the data collection step. You may think this step is quite easy, because it only requires doing the experiments and then collecting the data. But if you think so, you'd be completely wrong, because this is the trickiest part. Any unexpected noise will ruin the final model adequacy, and measurement capability is the key one.

Now, let me show you one hard lesson that we learned from poor Gauge R&R capability. In this case, we were helping our customer build their model, and we were confused to see that the final model was quite weird; it took us a lot of effort to do the troubleshooting. Finally, we urged our customer to re-evaluate their measurement capability. Surprisingly, as you can see from this image, nine samples were tested twice, and the repeatability was a disaster, because the wafers had oxidized from waiting a long time for measurement.

Once we found the root cause, we suggested our customer control the queue time and/or place the wafers in a nitrogen cabinet to prevent oxidation. After this improvement, the final model was adequate. Learning from this lesson, we urge our engineers to re-evaluate the measurement capability ahead of the data collection step. Sometimes the metrology owner will insist that their metrology is good enough, but we cannot simply trust them, because most of the time the metrology owner only tests the repeatability on the metrology tool itself and ignores the reproducibility, which means they may not consider the differences among operator skills or other issues brought along with the queue time or anything else.

Meanwhile, the P/T (precision-to-tolerance) ratio is one parameter used to determine measurement capability, and the P/T ratio is a function of the tolerance. Once the process spec is tightened, the measurement capability should be improved simultaneously. Otherwise, the metrology capability can ruin the final results.
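Under the common 6-sigma convention, the P/T ratio is 6·σ_gauge / (USL − LSL). A small sketch with invented numbers shows why tightening the spec degrades the same gauge's P/T, which is the point made above:

```python
def pt_ratio(sigma_gauge, lsl, usl, k=6.0):
    """Precision-to-tolerance ratio: k * sigma_gauge / (USL - LSL).

    k=6.0 is the common convention; some references use 5.15.
    """
    return k * sigma_gauge / (usl - lsl)

sigma = 0.5  # invented gauge standard deviation
print(pt_ratio(sigma, 0.0, 30.0))  # 0.1 -> commonly considered acceptable
print(pt_ratio(sigma, 0.0, 10.0))  # 0.3 -> same gauge, tighter spec: marginal
```

The gauge didn't change at all; only the tolerance window shrank, so the measurement system has to be improved along with the process.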

In addition, the Gauge R&R only evaluates random error. But what about other factors, like systematic error, stability over time, and so on?

To do the evaluation more properly, we can use the JMP Measurement Systems Analysis platform to help us make the evaluation plan. Finally, after the data collection step, we can build our model.

To find the best model and check model adequacy, we can make use of the abundant statistics and powerful visualization tools on the JMP platform. For example, the statistics here will tell us whether the model fit is good enough, or whether there are concerns to take care of: is there any collinearity concern, lack of fit, or overfitting concern? Meanwhile, the interaction profiles can tell us the interaction effects between the variables. They can also reveal things we might not have figured out before. For example, in this case, if we hadn't done the DOE project, we would not have seen that the effect of gas ratio on density is greatly impacted by the power. By combining these values with our engineering sense, we can pick a better model. I'd also like to mention that JMP 18 has new features that can help us figure out the interaction effects more easily.
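The reason an orthogonal design makes these effects so easy to read is that each coefficient, including the interaction, can be estimated independently. A sketch with made-up responses on a 2² design (all numbers invented):

```python
# 2^2 orthogonal design in coded units and made-up responses.
runs = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
y    = [10.0, 12.0, 11.0, 19.0]

def coeff(col, y):
    """Least-squares coefficient for an orthogonal +/-1 column."""
    return sum(c * v for c, v in zip(col, y)) / len(y)

A  = [a for a, _ in runs]
B  = [b for _, b in runs]
AB = [a * b for a, b in runs]  # interaction column

print(coeff(A, y), coeff(B, y), coeff(AB, y))  # 2.5 2.0 1.5
```

The nonzero A*B coefficient is exactly what an interaction profile visualizes: the effect of one factor changes depending on the level of the other, which no OFAT experiment could have revealed.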

Finally, I also want to emphasize that project success requires the right time, the right place, and the right people. In most cases, the human factor is the main factor that may lead a project to failure.

We need to figure out the risk factors at the very beginning. For example, we need to make sure that our boss and colleagues will offer the necessary resources to help us implement the DOE project from start to finish.

We also need to make sure that our customer will not suddenly call an end to the project. To evaluate the consensus among our team members, we innovatively used the choice design to explore the consensus. The choice design is mostly used for consumer studies; it is similar to Gauge R&R reproducibility and can tell us whether our agreement is acceptable or not.

For example, in this case, in the forming and storming phases, we conducted a choice design to evaluate the consensus among our team members, and we saw that on some items the agreement was as low as 46%. Through further explanation among the team members, team building, and other activities, we finally boosted our agreement from 46% to 90%. By realigning the team's priorities, we got everyone's minds in one place, and finally the energy went in one direction.
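A minimal sketch, with hypothetical ratings, of a percent-agreement style metric like the one quoted above: the share of choice items on which two team members picked the same option.

```python
def agreement(r1, r2):
    """Share of items on which two raters chose the same option."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

# Invented choices of two team members over six items.
alice = ["A", "B", "B", "A", "C", "A"]
bob   = ["A", "C", "B", "B", "C", "A"]
print(round(agreement(alice, bob), 2))  # 0.67 -> 4 of 6 items matched
```

The actual choice design analysis in JMP goes further, modeling preference utilities, but a low raw agreement like the 46% above is already a warning sign for the team.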

This is an example of innovatively applying the choice design to a DOE project. Of course, there are many other useful tools on the JMP platform waiting for us to explore and apply to DOE projects flexibly and innovatively. Those methods will help us implement our projects more successfully.

As the JMP co-founder John Sall once said, "Great software in the right hands can change the world." We are sure that if every engineer follows these eight steps and fulfills all the criteria within each step, then no matter whether you are senior or junior, you can carry out the DOE project by yourself and finally get a more accurate model. After hearing what I've just said, you may wonder whether these eight steps are really that powerful. Well, to explore the secret, why don't you just give it a shot and experience it yourself? Then you will see the magic of this workflow. That's what I want to share today, and thank you for listening.