We present a case study from our training program for new employees: the "Introduction to SQC" course, which had received the lowest evaluations, was redesigned around a simulated manufacturing exercise called the "Clay Manufacturing Exercise," so that by practicing what they had learned in lectures, participants could acquire a wide range of skills:
1. Manufacturing (QMS perspective, processes)
2. The organization and roles required for manufacturing (people)
3. The elements of manufacturing (5ME)
4. Manufacturing indicators (KPIs)
5. Variation and visualization (SQC)
6. The Toyota way of working (8-step problem-solving thinking)
7. Teamwork
8. Presentations
Participant feedback:
- "I was able to experience a manufacturing flow close to real work, which helped me understand the role I should play in my actual job."
- "We assigned roles as a team, each of us thought through our part to complete the final product, and we were able to identify problems. I felt this was a worthwhile lecture that will let me apply SQC in my actual work as well."
- "It was interesting to think about the manufacturing flow in terms of costs such as unit cost."
- "Being able to discuss, consult, and cooperate with my peers deepened my understanding considerably."
- "Hearing the feedback on each group's presentation let me learn from an objective point of view."
An instructor who sat in on this exercise has also customized it for elementary school students, making it a useful reference for the idea that "making things is developing people."
At the Kagoshima Prefecture tea market, with the aim of improving crude tea (aracha) quality, the appearance and liquor color of the crude tea submitted for auction are photographed with a digital camera, and the numerical values obtained from texture and chromaticity analysis of those images are fed back to each grower via smartphone, together with the unit price and the images. In this study, we used various JMP features to examine methods for explaining and predicting crude tea component values, which strongly affect tea taste and aroma, from cultivation information and image analysis data.
We used a data set of 1,292 first-flush tea samples from production areas across the prefecture that were delivered to the tea market, image-analyzed, auctioned, and then analyzed for components by near-infrared spectroscopy. Principal component analysis of the first-flush crude tea component values (total nitrogen, free amino acids, theanine, fiber, tannin, caffeine, and vitamin C) showed that 74.4% of the variation among the auctioned teas could be explained by principal component 1, an index related to total nitrogen and fiber, and principal component 2, an index related to tannin and caffeine content; unit price was positively correlated with total nitrogen and negatively correlated with fiber.
Control charts showed that total nitrogen and fiber frequently exceeded the control limits in the second half of the season (medium- to late-maturing cultivars). With a lower specification limit of 5% for total nitrogen content and an upper specification limit of 22% for fiber content, the nonconformance rates were 4.9% and 6.4%, respectively.
To predict total nitrogen and fiber from the crude tea image analysis data, we fitted PLS regressions and response surface models using cultivation information and 10 image analysis items as parameters. Both component values could be explained by a model incorporating auction date, cultivar, and the image analysis item "white stem" (an index of the maturity of the plucked new shoots). Furthermore, the design space feature of the profiler predicted that plucking and processing the major medium- to late-maturing cultivars by May 7, with white stem at grade 3 or lower, would bring the proportion of total nitrogen and fiber within specification to 98.9%.
In summary, crude tea component values can be predicted from cultivation information and image analysis data, and cultivation targets for keeping them within specification were obtained. We present an example of how this information is being used to guide plucking and processing in the field.
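As a rough illustration of the kind of PLS model described above (not the authors' actual workflow, which was built in JMP), the following Python sketch fits a PLS regression predicting total nitrogen and fiber from cultivation and image-analysis predictors; the file name, column names, and number of latent factors are all assumptions.

```python
# Hypothetical sketch of the PLS step described above; the data file, column
# names, and preprocessing are assumptions, not the authors' JMP workflow.
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

df = pd.read_csv("aracha_samples.csv")  # 1,292 first-flush samples (hypothetical file)

# Cultivation info (auction date, cultivar) plus the "white stem" image item.
X = pd.get_dummies(df[["days_from_first_auction", "cultivar", "white_stem"]],
                   columns=["cultivar"])
y = df[["total_nitrogen", "fiber"]]     # the two component values to predict

pls = PLSRegression(n_components=3)     # number of latent factors is a guess
pls.fit(X, y)
print(pls.score(X, y))                  # overall R^2 of the fit
print(pls.coef_)                        # coefficients on the predictors
```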
Thursday, March 7, 2024
Ballroom Ped 4
Companies in the pharmaceutical industry must demonstrate shelf life by measuring product performance over time at storage temperatures. To accelerate the test, it is often also done at elevated temperatures. Although Arrhenius demonstrated in 1889 how to combine results at different temperatures into one model, many companies still analyze each temperature separately. It is not cost-efficient to stratify data into different models. In addition, ongoing verification of shelf life must be performed, where it is often enforced that all individual observations are inside specifications. Due to measurement noise, this criterion is often not met. Instead, it should be enforced that measurements are inside prediction intervals from the initial shelf-life study, which is a weaker requirement. JMP has the Arrhenius equation in the Degradation/Nonlinear Path/Constant Rate platform. However, this platform lacks some of the excellent features of the Fit Model platform, such as studentized residual plots, Box-Cox transformation, random factors, and prediction intervals. This presentation demonstrates how the Arrhenius equation can be entered into the Fit Least Squares platform by making a Taylor expansion with only four terms, as well as how a JMP workflow can ease the calculations.

Thank you for giving me this opportunity to talk about how we, as JMP partners, help our clients reduce costs and especially avoid non-conformities through smart shelf-life calculations and verification of shelf life. First, I will describe some of the issues we have seen at our clients with shelf-life studies, and especially with the enforcement in ongoing verification. Looking at the shelf-life studies first, where you set the shelf life, we can see that they are based on a number of batches, typically 3-4, where you measure the change over time. Of course, the end level depends on the starting point and the slope you see. Hopefully, the slope is the same for all batches, but we often see that batches start at a different level. This can be a challenge because future batches, when you do ongoing verification, might start at a different level. If they start lower, you might have an issue that you are out of spec when you reach shelf life. What we recommend is to convert the shelf life to a requirement at batch release, meaning: what should the value be at time 0 to ensure you will be inside specification at shelf life? I'll come back to that, of course. We also very often see that linear regression is done on an absolute scale, but degradation is relative. If degradation is large, you're actually not describing the degradation rate. The solution is simple: just take the natural log (Ln) of your data before you make the regression.

I'm sorry, Pierre, to interrupt. I can see a button saying "Zoom is sharing your screen." I think it's gone now. Is there a way you can hide your taskbar? It's quite large, like a double taskbar at the bottom. Okay, let's try something else. Maybe if I do it like this, is this better? Much better, yes. Now I'm showing in presentation mode, so maybe I should show it in presentation mode instead. Yes, that looks good. I will just check before we go on, because when I share JMP I will do it like this, and then I'm afraid you will see the taskbar when I'm working in JMP. That's fine, I guess. Should we do it that way? Let's do it like that. When I show PowerPoint, I will do it in presentation mode. I think that works better. Absolutely. Perfect.
Thank you. I guess we just start all over. Start over again. Sorry about that. It's about two minutes, so not a big disaster.

Thank you for giving me this opportunity to talk about how we, as JMP partners at NNE, help our clients make better shelf-life studies and especially minimize the number of non-conformities during verification. First, I will describe a little bit the issues we see with shelf-life studies and especially the ongoing verification, where you have to demonstrate that the shelf life you stated is still valid. This is done at regular intervals by testing some batches.

If you look at the first study that is done, the shelf-life estimation where you set your shelf life, you would typically take a set of batches, 3-4, and let them decay over time. You measure these batches over time, see how much they decay, and then you can calculate how long a shelf life you have. But the level you reach over time depends not only on the slope, the decay, which is the main purpose of a shelf-life study; it also depends on where you start. Since you only have three or four batches in your shelf-life estimation study, and you're going to verify it on other batches going forward, if those start lower than the batches you had in your shelf-life study, you might have a problem in the verification, even though they don't decay more, just because they start lower. Our solution is to convert the shelf-life requirement into a requirement on the start value, simply a release limit, which of course should be better than what you require at shelf life, so there is room for the change. Then you're not sensitive to future batches, because you ensure they start high enough, so to say.

We also often see that people do the regression over time on an absolute scale, but degradation is relative. We strongly recommend taking the natural log (Ln) of your data before you make the regression. We also see many companies having big issues with measurement reproducibility, and shelf life is the most difficult measurement situation you have, because for obvious reasons you have to measure the batches at very different time points, typically years apart. Of course, everything changes, and if you don't have a very stable measurement system, you get a lot of noise on your regression curve. You can actually reduce that by entering your time point as a random factor in the model.

We also very often see companies doing shelf-life studies at many different temperatures, which is a good idea because then you can accelerate the test. But for some reason, temperatures are typically modeled on their own. It's very rare that we see people model across temperatures and get one model describing all temperatures. We strongly recommend modeling across temperatures, because then you have more degrees of freedom to estimate your residuals. In the same area, we also see that when people model each temperature on its own, you of course need time-0 measurements at all the different temperatures. But it's actually the same measurement being used, because at time 0 it's just the start value. You then have to be careful when you move to a model across temperatures, because you shouldn't have the same observation in there four times if, for example, you have four temperatures. When modeling across temperatures, it's very important that the time-0 measurement has only a single registration, at one temperature.
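The recommendation to model on the Ln scale follows from constant-rate (first-order) kinetics; a sketch in my notation, not taken from the slides:

$$
y(t) = y_0\,e^{-k t} \quad\Longrightarrow\quad \ln y(t) = \ln y_0 - k\,t,
$$

so on the log scale the regression is linear in time regardless of how large the total decay is, and the slope $-k$ is the relative degradation rate.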
Then it doesn't really matter which one, because it's at time 0. These are the issues we see when people are setting shelf life, and these are the solutions we recommend.

Then there is the shelf-life verification: now that you have stated a certain shelf life, you have to prove at regular intervals that this shelf life is still valid, and there is an ICH guidance for that. It is also included in JMP; you'll see that in a minute. What you do there is take these typical three to four batches, take the confidence limit on the slope for the worst batch, and use that to state your shelf life. But this is a little challenging, because you are then assuming that you have seen the worst batch among the first three, which is typically not the case. This is actually the reason we do not recommend using the ICH method: it often leads to problems in verification. We also often see that people get a too optimistic estimate of the slope standard error, because they assume independent observations, so the number of degrees of freedom is just n-1. However, very often many measurements are done in the same analytical run. If you have run-to-run differences, the measurements are not independent, and you need to correct for that by using the effective degrees of freedom. Just by putting the measurement date in as a random factor, you will get that. Then you will typically get a somewhat bigger standard error, which might be seen as a problem. But what actually happens is that it increases the requirement on the start value, and thereby you minimize the risk of failing verification.

Last but definitely not least, we still see many companies requiring that at verification, all measurements should be inside specification. But if you have a measurement issue, and you always do, then you can actually be outside specification at shelf life due to measurement noise alone. What we recommend instead is to build a proper model with the right degrees of freedom, on the same data you used to set the shelf life, and from that make a prediction interval where you can expect future observations to be. This interval will typically be slightly wider than your specification interval and thereby minimize the risk of failing.

I will now briefly describe the formulas, the platforms, and where to find them in JMP, but quite rapidly, because you can access the formulas in the presentation material afterwards, and I think it's more interesting to demonstrate it in JMP. Let's first start with this release limit. How do you convert your slope and the standard error on the slope into what the release limit should be at start? Let's say we have something that drops over time, and we have a lower specification limit. We are taking this from the WHO guidance on stability evaluation of vaccines. You simply take the lower specification limit and subtract the estimated slope times the shelf life, that is, how much it drops over the shelf life. Then you also need to add some uncertainty, coming from the standard error on the slope and from the measurement standard deviation on your starting value. We use this formula exactly, except that we convert the normal quantile to a t-quantile because the standard deviation is unknown.
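Putting the pieces just described together, a sketch of the lower release limit in my own notation (the presenter follows the formula in the WHO stability guidance, with the normal quantile replaced by a t-quantile):

$$
\mathrm{LRL} \;=\; \mathrm{LSL} \;-\; \hat{b}\,t_{\mathrm{SL}} \;+\; t_{1-\alpha,\,\nu_{\mathrm{eff}}}\;\sqrt{\,t_{\mathrm{SL}}^{2}\,\mathrm{SE}(\hat{b})^{2} \;+\; \frac{s^{2}}{n}\,},
$$

where $\hat{b}$ is the estimated (negative) slope, $t_{\mathrm{SL}}$ the shelf life, $\mathrm{SE}(\hat{b})$ the standard error of the slope, $s$ the repeatability standard deviation at release, $n$ the number of release measurements averaged, and $\nu_{\mathrm{eff}}$ the effective degrees of freedom from the model.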
As an estimate of the measurement repeatability, we actually use the RMSE from the model, but besides that, it's exactly as described in the WHO guidance. When you build your model in JMP, you get the slope, the effective degrees of freedom based on the equation down here, the standard error on the slope, and the residual error. You can feed these into this formula, and you have your lower release limit. If it turns out that one of the bottlenecks is the measurement noise on the start value, you can make several measurements at start and take their average, which suppresses this term by the square root of N. Of course, these measurements have to be taken in different analytical runs.

Next, when you model the response versus time, you can just go to Fit Model and make your regression. There we strongly recommend that you take a log of your result, because then, with a constant-rate reaction, it will be linear in time. If the decay is small, you can also do it without the log. But why worry about whether the decay is small or large? Just take the log and it works no matter the size of the decay. If you want to model across temperatures, you have to use the Arrhenius equation, where you can describe decays at different temperatures using an activation energy. With the degradation platform's nonlinear path, this can be described nicely in JMP, and there you can build a model across all temperatures. It's pretty easy to do once you find the platform, and I will demonstrate it in JMP in a minute. From that, you get your model coefficients: an intercept, a slope, and an activation energy, and they even come with standard errors and some covariances. Putting these numbers together, you can calculate the slope at each temperature and the standard error on the slope at each temperature. Then you can feed this into the lower release limit, and you know what it should be. All these model parameters, standard errors, and correlations you simply get from JMP, but you have to put them into the equation shown up here, and it's actually not as straightforward as it might look.

For that reason, we recommend making a Taylor expansion of the Arrhenius equation, because then you can fit it as a polynomial. Normally, we see that up to third order is sufficient, and that requires four different temperatures. The great thing about Fit Model, where you can do this once you have made the Taylor expansion, is that you can put in random factors. As I mentioned previously, you actually need to put in measurement time as a random factor, and often you also want batch as a random factor, because you would like to predict what happens in future batches, not just those used for the study. Of course, you also get better model diagnostics. If you scale your Arrhenius temperature properly, so it is 0 at the temperature of interest, all these terms go away because they are 0. Then it's very easy to get the slope and the standard error, because they are just the coefficient and the standard error in front of the time parameter in your model.
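For reference, a sketch of the Arrhenius model and its third-order Taylor expansion as I understand the description, in my own notation (JMP's Arrhenius temperature is $11605/T_{\mathrm{K}}$, which is how the value 40.272 corresponds to 15 °C later in the demo):

$$
\ln y \;=\; \beta_0 \;+\; \beta_1\,t\,e^{-E_a\,\Delta}, \qquad
\Delta \;=\; \frac{11605}{T_{\mathrm{K}}} \;-\; \frac{11605}{T_{\mathrm{ref,K}}},
$$

$$
e^{-E_a\Delta} \;\approx\; 1 - E_a\Delta + \tfrac{1}{2}E_a^{2}\Delta^{2} - \tfrac{1}{6}E_a^{3}\Delta^{3}
\;\;\Longrightarrow\;\;
\ln y \;\approx\; \beta_0 + \beta_1 t + \beta_2\,t\Delta + \beta_3\,t\Delta^{2} + \beta_4\,t\Delta^{3}.
$$

Since $\Delta = 0$ at the reference (storage) temperature, the slope at that temperature and its standard error are simply the coefficient on $t$ and its standard error from the fit, and the four time-dependent terms are the "four terms" mentioned in the abstract.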
It's much easier than what I showed on the previous slide. Then, on top of the third-order Taylor expansion, you can also put in batch multiplied by time, the interaction between batch and time, to see if there is a batch-dependent slope. Hopefully that's not the case, but it can be. Even worse, we also put in Arrhenius temperature times time times batch, to see if the activation energy is batch dependent. That rarely happens, and of course shouldn't happen, but I think it's nice to check before making the assumption.

Now, let's get into JMP to see how this works. I will now shift to JMP. Here I have a case where I have studied some batches at different temperatures and at different times. Let's first look at the result. Here you see the result, of course with Ln applied, so it's supposed to be linear, at four different temperatures, 15, 25, 30, and 40 degrees, from 0 to 36 months. As you can see, and as expected, the higher the temperature, the steeper the slope. This can easily be described by the Arrhenius equation. Each batch is measured in duplicate at each time point. You can also see the typical case where we have some measurement variation from day to day. For example, all the measurements made at three months lie above the regression line, indicating that on that day we measured too high, while at month nine they are typically too low. Clearly, these numbers are contaminated with measurement noise. This is quite typical, because measurements are taken at very different time points, and it can easily happen that you measure higher on some days than on others. You will see in a minute what influence that has.

Let's start doing some modeling. You can go into the degradation analysis in JMP. There, you cannot combine the temperatures, but you can do it by temperature, following the ICH guidance. I have opened it here for 15 degrees. It works in the way that, with a significance level of 0.25, you check whether you can assume a common slope and a common intercept. If the p-value is below 0.25, you have to use separate intercepts and separate slopes. For the 15 degrees, following the ICH guidance with this significance criterion, you can assume a common slope, but you will have different intercepts. With these different intercepts and a common slope, you put a confidence interval on each regression line, and then you take the worst batch, which in this case is batch B. Where this crosses the lower spec limit, on the Ln scale, that is your shelf life; in this case, 55 months. It's very easy to do, but there are some problems with this method. The first is that the significance level of 0.25 gives a high risk of falsely concluding that you need different slopes when that might not be the case. Even worse, you're just looking at the worst of the first three batches. It's not very probable that the worst batch you're ever going to make is among the first three. This can really give you serious issues later on in the ongoing verification. Even though it's easy, it's not what we recommend.

Of course, you can make exactly the same models in Fit Model, which is what I'm showing here. Again, first by temperature, and later we will combine them. There you can put in time, batch, and time times batch, in this case for 15 degrees.
Time times batch is the term that accounts for the slope being batch dependent, meaning you cannot assume a common slope. You can see it has a high p-value, but I don't like to use p-values because they are so sensitive to sample size, signal-to-noise ratio, and so on. I prefer information criteria, which are more robust across different sample sizes and noise levels. I prefer the corrected Akaike information criterion, which is based on minus the log-likelihood, so the lower the better. If I take time times batch out, it goes from -188 down to -193, meaning it's a better model. Now I have justified that I can have a common slope. Then I can go to batch number. It has a more borderline p-value, still in the high end, but I'm not using the p-value, I'm using the information criterion. If I take it out, it drops from -193 to -195, still dropping. In this way, I have also justified that I can use a common intercept.

However, as I mentioned before, be careful here, because these numbers are not independent, and I'm not telling JMP that yet. You can do better by going to Fit Model and fitting the same model; the only difference is that I now put in measurement day as a random factor. Now I can correct for the fact that these measurements are grouped, that numbers from the same day come from the same analytical run. Now you can see that you get different p-values. Again, looking at the information criterion, batch number times time goes from -207 down to -239: a better model, so a common slope is justified. But see now what happens when I take out batch number. It's the same as before, just with measurement day added, and it's at -239. Now it actually increases to -236, so I shouldn't take that one out, meaning I cannot assume a common intercept. It's a good example of why you need to put in measurement day, because otherwise you could be fooled by numbers that are not independent.

If you want to model across temperatures, as I mentioned previously, it's fairly easy to do. Just go to the degradation data analysis and choose the nonlinear path. You have the Arrhenius equation built in, and across the four temperatures you get a common intercept, a common slope, and a common activation energy. However, I've just shown that the common slope is fine, but the common intercept is questionable. We actually need separate intercepts with a common slope, and you cannot really do that here. What you can do is say that you want separate parameters for all batches, but then you get separate intercepts, separate slopes, and in this case a common activation energy, which I think makes sense. But you cannot get the combination of a common slope and separate intercepts here. There is a solution in JMP: if you go to the Nonlinear platform, you can build your own fitting equation. There you can, with the Arrhenius equation, have, as you can see here, a common activation energy, a common slope, but separate intercepts. So it can be done. But the challenge is that you cannot put in random factors, so you have a hard time correcting for the measurements not being independent. You cannot put batch in as a random factor either, so you have a hard time making a model describing batches in general, which is typically what you need. For that, we like to go to Fit Model and put in this Taylor expansion of the Arrhenius equation, as you're seeing here.
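As a rough non-JMP analogue of the per-temperature Fit Model step with measurement day as a random factor, here is a Python sketch using statsmodels; the data file, column names, and the plain-AIC comparison are assumptions (the talk itself uses JMP's Fit Model platform and AICc).

```python
# Hypothetical sketch: ln(result) vs. time and batch at one storage
# temperature, with measurement day as a random factor, comparing a full
# model against a common-slope model via a rough AIC.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("stability_15C.csv")     # hypothetical: one temperature's data
df["ln_result"] = np.log(df["result"])    # model on the log scale

def rough_aic(res):
    # Plain AIC from the ML log-likelihood; JMP reports the small-sample AICc.
    return 2 * len(res.params) - 2 * res.llf

full = smf.mixedlm("ln_result ~ time * C(batch)", df,
                   groups=df["measurement_day"]).fit(reml=False)
common_slope = smf.mixedlm("ln_result ~ time + C(batch)", df,
                           groups=df["measurement_day"]).fit(reml=False)

# A lower value for the reduced model would justify a common slope across batches.
print(rough_aic(full), rough_aic(common_slope))
```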
To first, second, and third order, of course, putting in batch number for different intercepts, and batch number times time to handle that you might have different slopes, hopefully not. Even worse, you can also put in Arrhenius temperature times time times batch number, to allow for a batch-dependent activation energy, which would be strange. But looking at the AICc, you take this term out first, and you see it sits at -730; how lucky can you be, it drops further, so it's a better model. I have now justified that I can have a common activation energy. The same with batch number times time: it has a borderline p-value, but again, looking at the AICc, you can see it's still dropping, so I can now also justify a common slope, which is of course a great thing. This you can also do in this model: as you can see down here, I have put in measurement day as a random effect, because this is easy to do in Fit Model. You cannot do that in the degradation platform.

Hopefully, you have seen that there are many different ways of calculating the slopes. I've tried here to see what difference it makes for your lower release limit. If you run this small script, you can see the many different methods: by temperature, by temperature with random time, the degradation platform with common everything or individual everything, the nonlinear platform, and the Taylor expansion without and with random time. Over here, I just type in the slopes and standard errors of the slopes we get from these models, and here you can see the lower release limit if you only make one measurement at batch release. You can see that the method we recommend, the Taylor expansion with random time, gives one of the highest release limits. Of course, with a higher release limit there is a lower risk that you later have issues in ongoing verification. There we would require that all batches start above 10.09; otherwise, we cannot be sure they still work at shelf life. You can see that if we use the random-time model on 15 degrees alone, which is where we have the requirement, without combining temperatures, you get an even higher release limit. But that's because you get fewer degrees of freedom from a separate model. By building a model across temperatures, we can get a release limit that is lower but still reliable. As you can see to the right, if you take two measurements at start (batch release) to suppress measurement noise, you can reduce it further, and with 10 measurements you can go even further down. How many measurements you want to take at batch release depends, of course, on your measurement noise and whether it is the bottleneck. But it really makes a difference which method you use. If you do not correct for the measurements not being independent, you can easily get a release limit that is too low, which will give you issues later in ongoing verification.

If you want to describe batches in general, for example to set the shelf life, you also need to add batch as a random factor. Now I'm putting in both batch and measurement day as random factors, so I'm not only describing the three batches used in my study, I'm describing batches in general. Then you can just go down to the profiler. I set it at Arrhenius temperature 40.272, which corresponds to 15 degrees Celsius, and you can see here the stated shelf life.
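To illustrate how the number of measurements averaged at batch release affects the release limit from the formula shown earlier, here is a small Python sketch; the slope, standard error, degrees of freedom, RMSE, and specification values are made up for illustration, not the presenter's data.

```python
# Hypothetical numbers showing how averaging n release measurements moves the
# lower release limit. All inputs are assumptions, not values from the talk.
from math import sqrt
from scipy.stats import t

LSL, shelf_life = 2.20, 24.0       # ln-scale lower spec and shelf life (months)
slope, se_slope = -0.004, 0.0008   # estimated monthly slope and its std. error
rmse, dof = 0.02, 20               # repeatability estimate and effective dof

for n in (1, 2, 10):               # measurements averaged at batch release
    lrl = (LSL - slope * shelf_life
           + t.ppf(0.95, dof)      # 5% one-sided alpha, as in the talk
           * sqrt((se_slope * shelf_life) ** 2 + rmse ** 2 / n))
    print(f"n={n}: lower release limit = {lrl:.3f}")
```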
They would like to prove that they have 24 months. If you look at the general line for all batches, with confidence limits, it stays nicely inside the limit. This company has no problem at all proving a shelf life of at least 24 months at 15 degrees. You can see I'm running a 5% one-sided alpha, because it's only a problem to be out on one side. However, if I want to predict where individual measurements could be, because this only shows where the true line is, I can run exactly the same model again and just change my alpha to 0.135% one-sided, like what's inside plus or minus 3 sigma, to describe everything. Then down here you still see the same; of course, it gets a little wider from changing the alpha. Then I would like to show the prediction limits down here. Unfortunately, you cannot show prediction limits on a profiler in version 17, so I'll briefly go to the JMP 18 Early Adopter version. It's not released yet, but it will come. Running the same model there, the great thing in version 18 is that you can see both the confidence limits on the profiler, that's the dark gray, and the prediction limits, or individual confidence limits. This is where you can expect individual observations to be, given the shelf life you have for these batches. You can see that you can expect values slightly below the lower specification. For ongoing verification, we therefore recommend setting the requirement that measurements should be inside the prediction interval, which is slightly wider than the specification interval. This way you can also avoid non-conformities due to measurement issues. Hopefully you have seen that there are a lot of pitfalls in doing shelf-life studies and verification, but JMP has a good toolbox to work around these pitfalls and do it right.

To conclude, I will go back to my presentation and show the issues I started with again, because now we have been through the material. When we do shelf-life studies at clients, we strongly recommend converting the shelf life to a release limit, because then you are sure that all batches you make in the future will live up to the shelf life, since you put a requirement on where they start. Of course, do all models on Ln data; just switch the log transform on in the model dialog window, and you get a regression that is supposed to be linear. Enter the measurement time as a random factor in the model; then you can correct for the fact that you probably don't have the same measurement level on all days. Of course, build a model across temperatures with the Taylor expansion of the Arrhenius equation, which is easy to do. And remember not to have multiple registrations: the time-0 point should only be entered at one temperature. Then, when you get to the ongoing shelf-life verification, we do not recommend the ICH method, because it assumes that you have seen the worst batch among the first three, which is probably not the case. It's also very important, when you calculate the release limit, that you get the right standard error on your slope and the right degrees of freedom. When you do not have independent measurements, it's not just n-1; you typically need to put in the analytical run as a random factor. Last, but definitely not least, please put a proper requirement on your ongoing verification: the measurements should conform to the prediction interval you made on the batches used to set the shelf life. Thank you for your attention. Hopefully, you are inspired to do shelf-life studies in a very good way using JMP.
Thank you very much.
Thursday, March 7, 2024
Ballroom Ped 3
SiO2 thin film has been widely used as STI liner, gate oxide, spacer, etc., in the semiconductor industry. The thickness of SiO2 layers is strictly controlled and is affected by facilities, chambers, and measurements. Among these factors, thickness is directly susceptible to measurements. If the measurement queue time is too long, the true thickness of the SiO2 layer formed by the thermal process may be distorted, as thickness may increase naturally in the atmosphere. To analyse the effects of queue time and measurements on SiO2 thickness, JMP GRR analysis was introduced. After defining the operation, a cause-and-effect diagram is used to summarize possible factors for thickness shifts. Next, thickness data from coupons are collected, based on the JMP MSA Design platform. The thickness of each coupon is measured multiple times as repeatability and degradation tests, with the same repeatability tests conducted every three hours as reproducibility tests. Once the variability in thickness from repeatability and reproducibility is analysed using Xbar and S charts, GRR analysis is performed to evaluate current GRR performance. Finally, the relationships between P/T ratio, alpha/beta risks, and spec tolerance are examined, and regression models between thickness and queue time are built to determine whether the measured thickness can be trusted.

Hi, everyone. I am Jiaping Shen, a Process Support Engineer from Applied Materials. Applied Materials is a leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. There is an internal JMP program inside Applied Materials to help engineers solve engineering issues using JMP. Today, as a member of the JMP program, I'd like to share how I use JMP to do gauge repeatability and reproducibility analysis on silicon dioxide thickness, to assist queue time control and measurement capability evaluation.

In wafer fabrication, process engineers rely on metrology tools to monitor each layer to ensure product quality. If measurement results are not accurate, it may lead to quality issues. So how are measurement results affected? If one tool measures the thickness of a part several times and the variation is huge, the tool's repeatability is bad. If another tool measures the same part and the gap between the two tools is huge, the reproducibility is bad. The analysis of the repeatability and reproducibility of gauges is called GRR analysis. In this project, I take silicon dioxide thickness as an example to introduce how to evaluate measurement capability. Unlike other GRR projects, I use measurement queue time levels to introduce reproducibility. Here is a general overview of the analysis flow: based on the data collected, we evaluate the GRR performance and conduct root cause analysis to see if there is any further improvement; then we discuss current process capability and explore future opportunities.

The silicon dioxide thickness was collected from 15 coupons on a wafer. Each coupon was measured four times at zero, three, and six hours after silicon dioxide generation, giving 180 data points, according to the JMP MSA Design platform. The thickness spec is 97 to 103 angstroms. For GRR performance, we have four success criteria: P/T ratio, P/TV ratio, P/V ratio, and P/M ratio. In all four criteria, the numerator is precision, which is calculated from the variation due to repeatability and reproducibility. In this project, the tolerance is six, and I will use the P/T ratio as the success criterion.
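For reference, the precision-to-tolerance ratio used as the success criterion is commonly defined as below (my sketch; the multiplier k is usually 6, sometimes 5.15, depending on the convention):

$$
P/T \;=\; \frac{k\,\sigma_{\mathrm{GRR}}}{\mathrm{USL}-\mathrm{LSL}},
\qquad
\sigma_{\mathrm{GRR}}^{2} \;=\; \sigma_{\mathrm{repeatability}}^{2} + \sigma_{\mathrm{reproducibility}}^{2},
$$

with the tolerance $\mathrm{USL}-\mathrm{LSL} = 103 - 97 = 6$ angstroms in this project.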
How about GRR performance? The first model shows the P/T ratio is 8%, less than 10%, which means the measurement capability is good, while the P/TV ratio is 31%, greater than 30%, which would suggest the measurement capability is bad. Why? This is because the part range is too tight, so we cannot trust the P/TV ratio; we need to trust the P/T ratio, and it shows the measurement capability is good. How about the interaction between part and queue time? From the crossed GRR model, the interaction accounts for only 0.2% of the tolerance and is negligible.

With the current capability, how likely are we to make mistakes in judging whether a part is within spec or not? The risk that a good part is falsely rejected is called alpha risk; higher alpha risk increases production cost. The risk that a bad part is falsely accepted is called beta risk; higher beta risk passes risk on to customers. During production, parts at the target have zero alpha and beta risk, good parts near the spec limit have high alpha risk, and bad parts near the spec limit have high beta risk. How about the alpha and beta risks in this project? Both are zero. Can we trust that? No, because all the parts are within the spec limits, which is totally different from actual production. As a result, we cannot rely on the calculated risks. Next time, we should deliberately pick parts, 90% of which are uniformly distributed across the spec range, to simulate true production.

The current measurement capability is good, but do we have improvement opportunities for the future? I will use the Xbar-S chart to analyze the root causes of GRR, from repeatability and reproducibility. In the top repeatability (S) chart, the X-axis covers 15 parts at three queue time levels, and the Y-axis is the standard deviation, representing the repeatability of each set of four repeats. Overall, the standard deviation is very stable. Does queue time affect repeatability? The purple line is the average standard deviation for each queue time level, and you can see there is no trend. How about the standard deviation for each part? You can see the standard deviation is lower at the wafer center and higher at the edge, which may be attributed to higher stress at the wafer edge. Repeatability is very stable.

How about reproducibility? Most of the parts are beyond the measurement error (the red line), so the metrology tool can differentiate between parts. The trending purple line indicates that the average thickness increased by 0.2 angstroms over 6 hours, far below the spec tolerance of 6, so the long-term degradation risk is low. The M-shaped curve is what we want to get the best [inaudible 00:07:27] uniformity. If we overlay the three M-curves, they are parallel, so there is little part-to-queue-time interaction. Repeatability is stable, and reproducibility is also good compared to our spec tolerance. Still, paired t-tests between the first and fourth repeats were conducted to evaluate the short-term degradation risk due to native oxidation. The difference is statistically significant but not practically significant when compared to the spec tolerance, so there is little concern about part measurement degradation within the four repeats, and the crossed ANOVA GRR model is safe.

In the previous slides, we were talking about measurement capability. How about process capability? Process capability, Cp, is calculated from the ICC and the P/T ratio. The ICC in this case is 0.9, and the P/T ratio is 8.88%. The resulting Cp is greater than 2 and falls into the green region. It means the process is capable, the measurement is capable, and the thickness is stable within 6 hours.
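One plausible way the quoted quantities fit together is sketched below; the relations (and therefore the exact Cp value) are my assumptions about the template used, consistent with the numbers cited in the talk but not taken from it.

```python
# Assumed relations (not necessarily the talk's exact template):
#   ICC = sigma_part^2 / (sigma_part^2 + sigma_meas^2)
#   P/T = 6 * sigma_meas / tolerance
#   Cp  = tolerance / (6 * sigma_part)
from math import sqrt

tolerance = 103.0 - 97.0   # spec range, angstroms
icc = 0.9                  # intraclass correlation quoted in the talk
pt = 0.0888                # P/T ratio quoted in the talk

sigma_meas = pt * tolerance / 6
sigma_part = sigma_meas * sqrt(icc / (1 - icc))
cp = tolerance / (6 * sigma_part)
print(round(sigma_meas, 3), round(sigma_part, 3), round(cp, 2))  # Cp comes out > 2
```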
However, because the ICC is highly dependent on sample selection, the ICC is less reliable than the P/T ratio, so we had better keep the ICC fixed and move the P/T ratio horizontally as our first move if we want to make an adjustment. Keeping the ICC fixed and moving the P/T ratio from 0.08 to 0.16, we reach the green boundary; in this case, the spec limit is tightened from 6 to 3.35. How about the other risks? Three graphs show the P/T ratio, alpha risk, and beta risk as the tolerance is reduced from 100% to 30%. As the tolerance is reduced, the P/T ratio increases to 29.6%, which is marginally acceptable. The alpha risk stays under 5%. The beta risk goes beyond 10% when the tolerance is reduced by 40%. Based on these three criteria, we can tighten the tolerance range from 6 to 3.6 and keep the P/T ratio around 15%.

This graph summarizes how we iteratively and continuously improve process and measurement capability in different scenarios, as sketched after this paragraph. When Cp is greater than 2 and P/T is less than 0.3 (marked by the light green star), we should consider tightening the spec until Cp equals 1.33, to be ready for improvement. When Cp is less than 1.33 and P/T is less than 0.3 (marked by the blue star), we should improve the process part-to-part capability and reduce the ICC until Cp equals 2. When Cp is less than 1.33 and P/T is greater than 0.3 (marked by the orange star), we should consider optimizing GRR performance to reduce the P/T ratio below 30% while improving Cp at the same time. That is how we can decide whether to improve the measurement or the process in different cases. This is how we conduct GRR analysis based on different queue time levels. Thank you for listening.
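The improvement logic described above can be summarized as a small decision helper; the thresholds follow the talk, but the function itself is just an illustrative encoding, not part of the original material.

```python
def next_improvement(cp: float, pt: float) -> str:
    """Suggest the next action from process capability (Cp) and P/T ratio,
    following the decision regions described in the talk."""
    if pt >= 0.3:
        return ("Gauge not capable: optimize GRR to bring P/T below 30% "
                "while also improving Cp.")
    if cp >= 2:
        return ("Process and measurement both capable: consider tightening "
                "the spec until Cp is about 1.33.")
    if cp < 1.33:
        return ("Measurement capable, process not: reduce part-to-part "
                "variation (lower ICC) until Cp is about 2.")
    return "In between: monitor and keep improving both."

print(next_improvement(3.75, 0.0888))  # example call with the values quoted above
```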
Wednesday, October 23, 2024
Executive Briefing Center 9