The semiconductor industry faces significant challenges in extracting and leveraging data to improve yield. One of the Defectivity workshop's missions is to locate and identify physical defects on production wafers caused by the process. These defects can directly impact the product's electrical performance, highlighting the need for a better understanding of the correlation between defectivity levels (defect count per wafer) and the process involved. Those processes are highly complex and driven by multiple parameters, which makes understanding their relationship to defectivity a major enabler for process tuning toward yield improvement and cost optimization.

In this case study, we demonstrate the efficiency of JMP as a tool for managing and formatting data from in-line process collection through the following steps:

  1. Visualizing the initial extracted data to verify a hypothesis about the worst process tool.
  2. Manipulating and extending the data set for a more in-depth analysis of a previously identified process parameter, enabling correlation analysis.
  3. Quantifying dollar gains based on the analysis.

Finally, a clear results display is highly beneficial for management when considering a potential process change and making informed decisions regarding the cost-yield balance.

 

 

Welcome to my presentation of my e-poster, which is called Using JMP for Defectivity-to-Process Correlation: A Case Study.

So, moving on to the abstract, we'll go over the different points of visualizing the data and manipulating it, and we'll see how that allowed us to make decisions about process changes.

To begin today, we'll talk about Defectivity. Our job is to inspect the wafers in the semiconductor industry. We basically count how many defects we have on the wafer at a specific step of the process. This will be our main response variable on the Y-axis today. We are looking to improve yield with this analysis, and we'll see how mainly the graph builder allowed us to do that.

So first I will show what my data looks like at first sight, and then we will try to run correlation analyses with different parameters and see how we can conclude which parameter has the most influence on our defect count variable.

Switching now to the JMP demonstration. Here is my data set. You can see that we have a thousand rows, with each row being a wafer within a lot. So each lot has 25 wafers. For each wafer we have a defect count value, which is our main variable. We want to see how different columns, which I will go over afterwards, influence our defect count.

First of all, I will demonstrate how the graph builder can help us visualize our data at first sight. If we go into the graph builder and drag and drop the defect count variable, which is continuous, into the center, I can see that I have outliers. I will put the lot number on the X-axis and the wafer number at the top. So each data point here represents one row.

Now I want to see quickly how my population is distributed. I will order the X-axis by defect count, descending. Now we can see that a certain part of the population is higher, and we will try to identify why.

Coming back to the table, we have the first parameter, which is the tool that was used during the process for a specific step. We have three tools here: tool A, B, and C. This column was added to the data set using the Tables > Join feature, with another table, coming from Excel, that I do not have here, containing the wafer number, the lot number, and the associated tool. When matching on those two columns, you are able to join the two tables and add the tool column.
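The join step above can be sketched outside JMP as well. Here is a minimal plain-Python analogue of Tables > Join matched on the lot and wafer columns; all lot, wafer, and defect values are invented for illustration.

```python
# Sketch of JMP's Tables > Join, matching on lot number and wafer number.
# The two small tables below stand in for the defect table and the Excel tool table.
defects = [
    {"lot": "L01", "wafer": 1, "defect_count": 12},
    {"lot": "L01", "wafer": 2, "defect_count": 85},
]
tools = [
    {"lot": "L01", "wafer": 1, "tool": "A"},
    {"lot": "L01", "wafer": 2, "tool": "B"},
]

# Build a lookup keyed on the two matching columns, then add the tool column.
tool_by_key = {(r["lot"], r["wafer"]): r["tool"] for r in tools}
for row in defects:
    row["tool"] = tool_by_key.get((row["lot"], row["wafer"]))

print([r["tool"] for r in defects])  # ['A', 'B']
```

The key point is the same as in JMP: the pair (lot number, wafer number) uniquely identifies a wafer, so matching on both columns attaches exactly one tool per row.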

Each wafer now has its own process tool associated. The same was done with parameter A, which is not a categorical variable but a continuous one. I will go over that later. Coming back to the tools and going to the graph builder again, we want to know how the tool is impacting our defect counts. Defect count in Y again. Now I can put the tool on the X-axis and click on the box plot to see how this behaves.

I like to add the tool, the X variable, in the color section as well to make it a bit more fancy. When zooming in, I can clearly see that tool C is the best among my three tools. Now, if I want to make sure of that and add some numbers, I can Shift-click to add the caption box here, and in the location section select the axis table. Now I can have the mean of each population displayed.

Now what I want is the number of samples, the mean, and the median. This confirms that the tool C population has a lower mean than the other ones.
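The caption box statistics (N, mean, median per tool) can be reproduced with the Python standard library; the defect counts below are invented for illustration.

```python
import statistics

# Per-tool summary matching JMP's caption box: sample size, mean, median.
# These defect counts are made-up example values, not the real data set.
counts = {
    "A": [40, 55, 38, 61],
    "B": [90, 120, 85, 101],
    "C": [20, 25, 18, 23],
}
summary = {
    tool: {"n": len(v), "mean": statistics.mean(v), "median": statistics.median(v)}
    for tool, v in counts.items()
}
# With these invented numbers, tool C has the lowest mean, mirroring the talk.
print(summary["C"])
```

Displaying N alongside the mean matters: a low mean based on a handful of wafers would be far less convincing than one based on hundreds.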

This is the first element: we know that tool C is the best and tool B is the worst. Coming back to the table, we now want to do the same for parameter A, which is a continuous variable that we included in our data set. First of all, what we can do is a Fit Y by X analysis: put the defect count in the Y-axis and parameter A in the X-axis.

This will do a bivariate analysis. Clicking OK and zooming in, we can see that our two variables have some sort of correlation, but it's not clear. We are able to visually identify three sections: below 300, between 300 and 400, and above 400. I will show you how we can differentiate those three categories and then run the analysis on those three groups. We can use the Columns > Utilities > Make Binning Formula feature here.

This feature allows you to create a new column, right after the parameter A column, where you can choose cutpoint values dynamically to separate your populations. I set cutpoints at 300 and 400, and then did it again with thinner bins, 75 wide, that I will use later. This creates a categorical variable. Now I am able to go to Analyze > Fit Y by X, put in the defect count again, and use the bin column to do a one-way analysis this time.
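The binning formula can be sketched as a small function. The cutpoints 300 and 400 come from the talk; the parameter A readings are invented, and the bin labels are my own naming, not JMP's.

```python
import bisect

# Rough analogue of JMP's Make Binning Formula for parameter A:
# cut the continuous variable at 300 and 400 into three categories.
def bin_parameter_a(value, cutpoints=(300, 400)):
    labels = ["<300", "300-400", ">=400"]
    # bisect_right returns how many cutpoints the value has passed,
    # which is exactly the index of its bin.
    return labels[bisect.bisect_right(cutpoints, value)]

readings = [250, 310, 450]
bins = [bin_parameter_a(v) for v in readings]
print(bins)  # ['<300', '300-400', '>=400']
```

Turning the continuous variable into a categorical one is what makes the subsequent one-way analysis possible: each bin becomes a group whose mean defect count can be compared.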

But what to consider is that I want to isolate the worst tool to see how the defect count behaves for a specific tool. So I will add a local data filter and filter on tool. Now you can see that I can select which tool I want to analyze for this column. Again, zooming in to see better what's happening, what's interesting to do is Compare Means with Student's t tests.

Now we can identify that parameter A has a significant influence on the defect counts when it's below 300 for tool B; for the other tools it's not that significant, because the comparison circles overlap there, whereas here they are very distant.
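JMP's Compare Means option handles this natively; as a rough illustration of what such a comparison measures, here is a from-scratch Welch's t statistic between two groups of defect counts (below and above the 300 cutpoint). The samples are invented, and this is only a sketch of the underlying statistic, not JMP's exact procedure.

```python
import math
import statistics

# Invented defect counts for one tool, split at the parameter A = 300 cutpoint.
below = [30, 35, 28, 33, 31]
above = [80, 95, 88, 102, 90]

def welch_t(a, b):
    """Welch's t statistic: mean difference scaled by the pooled standard error."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(below, above)
# A large |t| means the two means are clearly separated, like the
# distant comparison circles JMP shows for tool B.
print(round(t, 2))
```

When the circles overlap (the other tools), |t| is small and the difference is not significant; when they are distant (tool B), |t| is large.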

To continue, we can also study another parameter, and I will show you how it was set up. We know that for a certain tool we had a different configuration starting from a certain date. To create that information in the data set, I will go into the graph builder and do it dynamically. So, putting in the defect count again.

But this time, instead of putting just the lot number and the wafer number, I will use the inspection time column, which is the time at which the wafer was inspected. On this I will use the date-time feature, since it is a date variable, and transform it into a new Month Year variable, which is categorical as well. I can then put it in the graph builder to see the month of inspection.
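The Month Year transform can be sketched with the standard datetime module; the inspection timestamps and their format are invented for illustration.

```python
from datetime import datetime

# Analogue of JMP's date-time transform: collapse full inspection timestamps
# into a categorical Month Year value. Timestamps below are made up.
inspections = ["2023-08-14 09:30", "2023-09-02 11:05"]
month_year = [
    datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%b %Y") for ts in inspections
]
print(month_year)  # ['Aug 2023', 'Sep 2023']
```

Grouping by month instead of exact timestamp is what makes a before/after configuration change visible on the X-axis.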

Now I want to separate the tools, so I will add a local filter on the tools as well. I know that for tool B, since September 2023, we had a new configuration for parameter B, starting from here. Now what I can do is select these points here, and those will be selected in the data set.

Here you can see I have 246 rows selected. Now I can use the Rows > Name Selection in Column feature. That creates a new column, parameter B, with the selected rows labeled configuration A and the unselected ones configuration B. You can now build another graph, which is this one, which is a bit fancy.
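The Name Selection in Column step amounts to turning a row selection into a categorical column. Here is a plain-Python sketch: rows for tool B inspected from September 2023 onward play the role of the selected points. The dates, the September 1 cutover, and the labels follow the talk, but all row values are invented.

```python
from datetime import date

# Sketch of Rows > Name Selection in Column: label selected rows
# (tool B, inspected on or after the configuration change) as configuration A,
# everything else as configuration B. Example rows are made up.
rows = [
    {"tool": "B", "inspected": date(2023, 7, 10)},
    {"tool": "B", "inspected": date(2023, 10, 3)},
    {"tool": "A", "inspected": date(2023, 11, 1)},
]
cutover = date(2023, 9, 1)  # new configuration on tool B since September 2023

for r in rows:
    selected = r["tool"] == "B" and r["inspected"] >= cutover
    r["parameter_B"] = "configuration A" if selected else "configuration B"

print([r["parameter_B"] for r in rows])
```

Once the selection is frozen into a column, it can be reused in any later graph or model, rather than having to re-select the points each time.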

You can now see that for tool B, when I put the configuration in the color section, I have configuration A for parameter B, because I have my parameter B column that was created using the Name Selection in Column feature. This is interesting because we can see that for this tool, the variation seems much better since the change.

Now that we have studied all of our parameters, I will show you how I built a summary graph that helps us make a decision. So this is the graph. Basically, we combine all the columns on the X-axis, with the defect count on the Y-axis, using the thinner binning for parameter A, 75 wide. What is interesting now is that we can see all the effects at the same time.

We can see that tool C is the best tool. Tool B was the worst tool with configuration B, and configuration A allows a good improvement for tool B. We can also see for each tool that as parameter A increases, the defect count increases as well. Below 300 it seems quite okay.

So, coming back to the slides, it's all summarized here. We started from a basic population, isolated parameters, and studied how they affected our defect count variable. This allowed us to make a decision about the process parameters and to improve our process and yield. We chose the best configuration, configuration A, for parameter B on the worst tool, which is tool B. We also limited parameter A to 300 on that tool.

That is how we used mainly the graph builder, and a bit of ANOVA, to identify the best process and improve yield in Defectivity.

Published on 12-15-2024 08:23 AM by Community Manager | Updated on 03-18-2025 01:12 PM





Start:
Thu, Mar 13, 2025 06:50 AM EDT
End:
Thu, Mar 13, 2025 07:30 AM EDT
Ballroom Gallery- Ped 2