
Variance Budgets (2020-US-45MP-531)

Level: Beginner

 

Ronald Andrews, Sr. Process Engineer, Bausch + Lomb

 

How do we set internal process specs when there are multiple process parameters that impact a key product measure? We need a process to divide up the total variability allowed into separate and probably unequal buckets. Since variances are additive, it is much easier to allocate a variance for each process parameter than a deviation. We start with a list of potential contributors to the product parameter of interest. A cause-and-effect diagram is a useful tool. Then we gather the sensitivity information that is already known. We sort out what we know and don't know and plan some DOEs to fill in the gaps. We can test our predictor variances by combining them to predict the total variance in the product. If our prediction of the total product variability falls short of actual experience, we need to add a TBD factor. Once we have a comprehensive model, we can start budgeting. Variance budgeting can be just as arbitrary as financial budgeting. We can look for low-hanging fruit that can easily be improved. We may have to budget some financial resources to embark on projects to improve factors to meet our variance budget goals.

 

 

Auto-generated transcript...

 


Ronald Andrews: Well, good morning or afternoon, as the case may be. My name is Ron Andrews, and the topic of the day is variance budgeting.
Oh, I need to share my screen. There's a file we'll be getting to, and we'll start with PowerPoint.
So variance budgeting is the topic.
I'm a process engineer at Bausch + Lomb; I've got my contact information here.
My supervision requires this disclaimer. They don't necessarily want to take credit for what I say today.
An overview of what we're going to talk about:
What is the variance budget?
A little bit of history.
When do we need one?
We have some examples.
We'll go through the elements of the process, cause and effect diagram, gather the foreknowledge, do some DOEs to fill in the gaps,
Monte Carlo simulations, as required.
And we've got a test case we'll work through.
So really, what is a variance budget? Mechanical engineers like to talk about tolerance stack-up.
Well, tolerance stack-up is basically a corollary of Murphy's Law: all tolerances will add unidirectionally in the direction that can do the most harm.
Variance budget is like a tolerance stack-up, except that instead of budgeting the parameter itself, we budget the variance -- sigma squared.
We're relying on more or less normally shaped distributions, rather than uniform distributions.
Variances are additive, which makes the budgeting process a whole lot easier than trying to budget something like standard deviations.
Brief example here.
If we use test-and-sort or test-and-adjust strategies, our distributions are going to look more like these uniform distributions.
So if we have one distribution with a width of 1, one with a width of 2, and another with a width of 3, and we add them all together, we end up with a distribution with a width of pretty close to 6.
In this case, we probably need to budget the tolerances more than the variances.
...If we rely on process control, our distributions will be more normal.
In this case, if we have a normal distribution with a standard deviation of 1, one with a standard deviation of 2, and one with a standard deviation of 3, and we add them up, we end up with a standard deviation of 3.7, a lot less than 6.
So we do the numbers: 1 squared plus 2 squared plus 3 squared equals, essentially, 3.7 squared.
Now to be fair, on that previous slide, if I added up these variances, they would have added up to the variance of this one.
But when you have something other than a normal distribution, you have to pay attention to the shape down near the tail. It depends on where you can set your specs.
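To make that contrast concrete, here's a minimal sketch in Python (with a made-up sample size) showing how uniform widths stack nearly linearly while normal sigmas combine in quadrature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Test-and-sort case: uniform distributions with widths 1, 2 and 3.
# Worst-case tolerances add directly, so the stack approaches width 6.
u = (rng.uniform(-0.5, 0.5, n)
     + rng.uniform(-1.0, 1.0, n)
     + rng.uniform(-1.5, 1.5, n))

# Process-control case: normals with sigmas 1, 2 and 3.
# Variances add, so the combined sigma is sqrt(1 + 4 + 9) ~ 3.74.
z = rng.normal(0, 1, n) + rng.normal(0, 2, n) + rng.normal(0, 3, n)

print(f"uniform stack range: {np.ptp(u):.2f}   (approaches 6)")
print(f"normal stack sigma:  {z.std():.2f}   (sqrt(14) ~ 3.74)")
```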
So,
What is the variance budget?
Non-normal distributions are going to require special attention, and we'll get to those later. For now, a variance budget is kind of like a financial budget; it can be just as arbitrary.
There are only three basic rules.
We translate everything into common currency. Now we do this for each product measure of interest, but we translate all the relevant process variables into their contribution to the product measure of interest.
Rule number two is fairly simple. Don't budget more than 100% of the allowed variance. Yeah, sounds simple. I've seen this rule violated more than once in more than one company.
Number three. This goes for life in general, as well as engineering, use your best judgment at all times.
A little bit of history. This is not rocket science. Other people must be doing something similar. I have searched the literature, and I have not been able to find published accounts of a process similar to this. I'm sure it's out there, but I have not found any published accounts yet.
So for me, the history goes back to the 1980s, when I worked at Kodak, with a challenge from management.
The challenge was to produce film with no variation perceived by customers.
Actually, what they originally said was produce film with no variation; that became no perceivable variations.
They defined that as a six-sigma shift being less than one just-noticeable difference.
Kodak was pretty good on the perceptual stuff, and all these just-noticeable differences were defined; we knew exactly what they were.
For a slide film like Kodachrome, which is what I was working on at the time, color balance was the biggest challenge.
In this streamlined cause-and-effect diagram, color balance is a function of the green speed, the blue speed, and the red speed.
Now, I've sort of fleshed out one of these legs, the red speed: the cyan absorber dye and one of the emulsions are the factors that contribute to the speed of that layer, which affects the red speed, which affects the color balance.
This is a very simplified version. There are actually three different emulsions in the red record, three more in the green record, and two more in the blue record. Add up everything and there are 75 factors that all contribute to color balance.
These are not just potential contributors. These are actually demonstrated contributors. So this is a fairly daunting task.
So moving on to when we need a variance budget.
I've got a little tongue-in-cheek decision tree here. Do we have a mess in the house? If not, life is good. If so,
how many kids do we have? If one, we probably know where the responsibility lies. If more than one, we probably need a budget.
This is an example of some work we did a number of years ago on a contact lens project at Bausch and Lomb.
This was long before it got out the door to the marketplace. We were having trouble meeting our diameter specs: plus or minus two-tenths of a millimeter.
We looked at a lot of sources of variability and we managed to characterize each one.
So, lot to lot -- and this is with the same input materials and the same set points -- fairly large variability.
Lens to lens within a lot, lower variability.
Monomer component No. 1, where we change lots occasionally: extreme variability. Monomer component No. 2 also had a fairly large variability.
Now we mix our monomers together and we have a pretty good process with pretty good precision. It's not perfect and we can estimate the variability from that.
That's a pretty small contributor.
We put the monomer in a mold and put it under cure lamps to cure it,
and the intensity of the lamps can make a difference. There we can estimate that source of variability as well.
We add all these distributions up
and this is our overall distribution.
It does go beyond the spec limits on both ends.
Standard deviation of .082.
And as I mentioned, spec limits of plus and minus .2; that gives us a Ppk of .81. Not so good.
The percent out of spec is estimated at 1.5%.
It might have been passable if it was really that good, but it wasn't.
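As a check on those figures, assuming a centered, roughly normal process, the Ppk and out-of-spec estimates follow directly from the standard deviation; a quick sketch in Python:

```python
from scipy.stats import norm

sigma = 0.082          # observed standard deviation (mm)
half_width = 0.200     # spec limits are target +/- 0.2 mm

# Process capability for a centered process: Ppk = (USL - mean) / (3 sigma).
ppk = half_width / (3 * sigma)

# Fraction beyond the two spec limits for a normal distribution.
pct_oos = 2 * norm.sf(half_width / sigma) * 100

print(f"Ppk = {ppk:.2f}")                         # ~0.81
print(f"estimated out of spec = {pct_oos:.1f}%")  # ~1.5%
```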
This estimate assumes each lens is an independent event. They're not. We make the lenses in lots,
and every lot has a certain set of raw materials and a certain set of starting conditions.
So within a lot, there's a lot of correlation.
And the two monomer components I mentioned that had sizable contributions -- looking here, you can occasionally see the yellow line and the pink line. These are the variability introduced by those two monomer components.
When they're both on the same side of the center line, they push the diameter out toward the spec limits, and we have some other sources of variability that add to the possibilities.
Another problem is that our .2 limit is for an individual lens, but we disposition based on lots. This plot predicts lot averages, and when we get a lot average out to .175, chances are we're going to have enough lenses beyond the limit to fail the lot.
So all added up, our estimate is that 4% of the lots are going to be discarded.
And they're going to come in bunches. We're going to have weeks when we can't get half of our lots through the system.
So this is a nonstarter. We have to make some major improvements.
The lot-to-lot variability from the two monomer components contributed a good chunk of that variability.
We looked and found that the overall purity of Monomer 1 was a significant factor and certain impurities in Monomer 2, when present, were contributors.
Our chemists looked at the synthetic routes for these ingredients and found that there was a single starting material that contributed most of the impurities.
They recommended that our suppliers distill this starting ingredient to eliminate the impurities.
That made some major improvements.
We also put variacs on the cure lamps to control the intensity.
Lamp intensity was not a big factor, but this was easy. And when it's easy, you make the improvement.
Strictly speaking, this was a variance assessment, rather than a variance budget. We never actually assigned numeric goals for each component.
At this point, we were kind of picking the low-hanging fruit. I mean, we found two factors that pretty much accounted for a large portion of the variability.
Maybe we need a little bit better structure to reach the higher branches, now that we need to reach up higher.
Current status on lens diameter:
Ppk is 2.1. The product's on the market now, and has been for a few years. This is not a problem anymore.
We've made major improvements in these monomer components. We're still working on them. They still have detectable variability -- detectable, but it hasn't been a problem in a long time.
So the basic question is, what do we do to apply data to a variance budget? Maybe reduce that arbitrariness a little bit.
We have to start by choosing a product measure in need of improvement.
We need to identify the potential contributors; a cause-and-effect diagram is a convenient tool.
We need to gather some foreknowledge. We need to know the sensitivity:
the change in the product measure per unit change in the process measure; what's the slope of that line?
We're going to need some DOEs to fill in the gaps.
We need to estimate the degree of difficulty for improving some of these factors.
And we estimate the variance from each component, and then we divide the total variance goal among the contributors.
Sounds easy enough.
Let's get into an example.
Let's say we're working on a new project, and along the way we have a new product measure called CMT (it stands for cascaded modulation transfer) to measure overall image sharpness. Kind of important for contact lenses.
Target is 100, plus or minus 10.
We want a Ppk of at least 1.33.
That means the standard deviation has got to be 2.5 or less,
so the variance has got to be 6.25 or less.
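That arithmetic follows from the usual capability relation for a centered process:

$$
P_{pk} = \frac{\mathrm{USL} - \mu}{3\sigma} \ge 1.33
\quad\Longrightarrow\quad
\sigma \le \frac{10}{3 \times 1.33} \approx 2.5
\quad\Longrightarrow\quad
\sigma^2 \le 6.25
$$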
What factors might be involved?
Let's think about a cause and effect diagram.
We can go into JMP and create a table. We start by listing CMT in the parent column, then we list each of our manufacturing steps in the child column.
And then we start listing those child factors over on the parent side and listing subfactors under them. These subfactors are obviously generic and arbitrary; the whole thing's hypothetical.
And we can go as many levels as we want. We can have as many branches in the diagram as we care to, but
we've identified 14 potential factors here.
So we go into the appropriate dialog box, identify the parent column and the child column. Click the OK button and out pops the cause and effect diagram.
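For readers following along outside JMP, here's a minimal sketch of what that parent/child table looks like; the step and subfactor names are hypothetical stand-ins, as in the talk, and Python/pandas is used only to hold the table:

```python
import pandas as pd

# Each row says "Child hangs off Parent" in the cause-and-effect tree.
# CMT is the root; manufacturing steps are its children; subfactors
# are children of the steps. All names are generic placeholders.
cause_effect = pd.DataFrame({
    "Parent": ["CMT", "CMT", "CMT",
               "Molding", "Molding", "Cure", "Cure"],
    "Child":  ["Molding", "Cure", "Extraction",
               "Mold Tool 1", "Mold Tool 2",
               "Lamp Intensity", "Cure Time"],
})
print(cause_effect)
```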
Brief aside here. I've been using JMP for 30 years now. I have very, very few regrets. This is one of them. And my regret is, I only found this last year.
I don't know, actually, when this tool was implemented. I wish I had found it earlier because this is the easiest way I found to generate a cause and effect diagram.
So we need to gather the sensitivity data.
Physics sometimes will give us the answer.
In optics, if we know the refractive index and the radius of curvature, that can give us some information about the optical power of the lens.
Sometimes it's physics; oftentimes we need experimental data.
So,
ask the subject matter experts. Maybe somebody's done some experiments that will give us an idea.
We're going to need some well-designed experiments because no way have all 14 of those factors been covered.
Several notches down on the list, in my opinion, is historical data.
And if you've used historical data to generate models, you know some of the concerns I'm nervous about. We need to be very cautious with this. Historical data is usually messy; it has a lot of misidentified numbers, sometimes things in the wrong column, and it needs a lot of cleaning. There's also a lot of correlation between factors.
Standard practice is to randomly reserve 25% of the data points for confirmation, generate the model with the other 75% of the data, and then test it with the 25% reserved data.
If it works, maybe we have something worth using. If not, don't touch it.
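Here's a sketch of that 75/25 reserve practice in Python/scikit-learn, with simulated stand-in data; the real inputs would be the cleaned historical factors and the product measure:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Simulated stand-in for cleaned historical data.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.2]) + rng.normal(0, 0.5, size=200)

# Randomly reserve 25% for confirmation; fit on the other 75%.
X_fit, X_hold, y_fit, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_fit, y_fit)

# If the holdout R^2 collapses relative to the training R^2, don't use it.
print(f"training R^2: {model.score(X_fit, y_fit):.2f}")
print(f"holdout  R^2: {r2_score(y_hold, model.predict(X_hold)):.2f}")
```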
So gathering foreknowledge, we want to ask subject matter experts independently to contribute any sensitivity data they have.
I'm taking a page from a presentation last year at the Discovery Summit by Cy Wegman and Wayne Levin.
This is their suggestion for gathering foreknowledge to avoid the "loudest voice in the room rules" syndrome.
Sometimes there's a quiet engineer sitting in the back who may have important information to impart, but may or may not speak up. So we want to get that information. Ask everybody independently to start with.
Then get people together and discuss the discrepancies. There will be some.
Where are the gaps? What parameters still need sensitivity or distribution information?
What parameters can we discount?
I'd like to find these.
What parameters are conditional?
Doesn't happen very often, but in our contact lens process, we include biological indicators in every sterilization cycle.
These indicators are intentionally biased so that false positives are far more likely than false negatives. When we get a failure in this test,
we sterilize again. We know our sterilization routine was probably right, but we sterilize again. So sometimes we sterilize twice. That can have a small effect on our dimensions. It's small, but measurable.
So we're going to need to plan some experiments to gather the sensitivities for things we don't know about.
And we'll look at production distribution data -- used with caution -- to generate sensitivities. We can also use it to generate information on the variability of each of the components and the overall variability of the product measure of interest.
We need to do some record keeping along the way. We can start with that table we used to generate the cause and effect diagram, add a few more columns.
Fill in the sensitivities, units of measure, various columns. Any kind of table will do. Just keep the records and keep them up to date.
We're going to need some DOEs to fill in the gaps.
There are some newer techniques -- definitive screening designs, group orthogonal supersaturated designs -- that provide a good bang for the buck when the situation fits.
Now, in this particular situation, we've got 14 factors.
We asked our subject matter experts.
Some of them have enough experience to predict some directional information, but nobody has a good estimate of the actual slopes. So we need to evaluate 14 factors. I'd love to run a DSD, but that requires 33 runs, and I don't have the budget for it.
So we're going to resort to the custom DOE.
So,
Go to the custom DOE function and... we've been using PowerPoint long enough now; it's time we demonstrated a few things live in JMP.
We go to DOE > Custom Design.
And
You don't have to, but it's a very good practice to fill in the response information (if I could type it right).
Match target, from 90 to 110.
Importance of 1; that only makes a difference if we have more than one response. Then the factors.
I have my file, so I can load these quickly. Here we have all 14 of the factors.
Factor constraints: I've never used them, but I know they're there if some combination of factors would be dangerous; we can disallow it.
The model specification. This is probably the most important part.
This is basically a screening operation. We're just going to look at the main effects.
Now, our subject matter experts suggested that interactions are not likely.
And
nonlinearity is possible but not likely to be strong.
So we're going to ignore those for now, at least for the screening experiment.
We don't need to block this.
We don't need extra center points.
For 14 main effects, JMP says a minimum of 15 runs -- that's a given -- with a default of 20.
I've learned that if I have a budget that can run the default, that's a good place to start. I can do 20 runs; 33 was too much. I can manage the 20.
Let's make this design.
I left this in design units. This is a hypothetical example; I didn't feel like replacing these arbitrary units with other arbitrary units.
We've got a whole suite of design evaluation tools; there are a couple that I normally look at. One is the power analysis.
If the root mean square error estimate of 1 is somewhere in the ballpark, then these power estimates are going to be somewhere in the ballpark: .9 and above, pretty good. I like that.
The other thing I normally look at is the color map on correlations.
I like to actually make it a color map.
And it's kind of complicated. We've got 14 main effects, and I honestly haven't counted all the two-way interactions. What we're looking for is confounding effects, where we have two red blocks in the same row.
Well, I don't see that. That's good. We've got some dark blue where there's no correlation. We've got some light blue where there's a slight correlation. And we have some light pink where maybe it's a .6 correlation coefficient.
This is tolerable. As long as we don't have complete confounding, we can probably estimate what's causing the effect.
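The same check can be run outside JMP. Here's a sketch in Python that computes the main-effect versus two-factor-interaction correlations the color map displays; a random ±1 matrix stands in for the exported 20-run custom-design table:

```python
import numpy as np
from itertools import combinations

# Random +/-1 matrix standing in for the 20-run, 14-factor design table.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(20, 14))

# Build every two-factor-interaction column.
pairs = list(combinations(range(X.shape[1]), 2))
XI = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])

# Correlation of each main effect with each interaction column.
C = np.corrcoef(np.hstack([X, XI]), rowvar=False)
k = X.shape[1]
worst = np.abs(C[:k, k:]).max()
print(f"largest |r| between a main effect and a 2FI: {worst:.2f}")
# |r| = 1 would be complete confounding (two red blocks in one row);
# moderate values around .6 are tolerable for a screening design.
```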
Now this is good. Move on, make the table.
Well, this is our design.
Got the space to fill in the results.
I'm going to take a page from the Julia Child school of cooking: do the prep, put it in the oven, and then take a previously prepared file out of the oven that already has the executed experiment. These are the results.
CMT values, we wanted them between 90 and 110.
We got a couple here in the 80s. There's 110.5, we've got 111 here.
Looks like we have a little work to do.
Let's analyze the data.
Everything's all done for us. There's a response. Here's all the factors. We want the screening report. Click Run.
R squared is .999.
Yeah, you can tell this is fake data. I probably should have set the noise factor a little higher than this.
The first six factors are highly significant; the next eight, not so much.
I was lazy when I generated it.
I put something in there for the first six.
Now, typically we eliminate the insignificant factors.
So we can either eliminate them all at once.
I tend to do it one at a time.
Eliminate the least significant factor each time and see what it does to the other numbers. Sometimes it changes, sometimes it doesn't.
Eliminate this one and it looks like Cure1 slipped in under the wire, .0478.
It's just under .05. I doubt that it's a big deal, but we'll leave it in there.
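Here's a sketch of that one-at-a-time elimination loop in Python/statsmodels, on made-up screening data (this is a generic backward elimination, not the JMP screening platform itself):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X, y, alpha=0.05):
    """Refit after dropping the least significant factor each time,
    since the other p-values can shift as terms leave the model."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit, cols
        cols.remove(worst)
    return None, []

# Made-up screening data: 6 active factors out of 14, 20 runs.
rng = np.random.default_rng(2)
X = pd.DataFrame(rng.choice([-1.0, 1.0], size=(20, 14)),
                 columns=[f"F{i+1}" for i in range(14)])
y = X.iloc[:, :6] @ np.array([3, 2.5, 2, 1.5, 1, 0.8]) + rng.normal(0, 0.5, 20)

fit, kept = backward_eliminate(X, y)
print("kept factors:", kept)
```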
So we look at the residuals; that's kind of random, that's good. Studentized residuals, also kind of random.
We need to look at the parameter estimates.
This is what we paid for.
These regression coefficients are the slopes we were looking for. These are the sensitivities.
That's why we did the experiment. I'm a visual kind of guy, so I always look at the prediction profiler.
And one of the things I like here... well, I always look at the plot of the slopes, and I look at the confidence intervals, which are pretty small. Here you can just barely see there's a blue shaded interval. I also like to use the simulator when I have some information about the inputs, so that we can enter the variability for each of these.
Now, if you'll allow me to again use the Julia Child approach, I'll go back to the previously prepared example where I've already input the variations for each one of these. For Mold Tool 1, I input an expression that results in a bimodal distribution.
And for Mold Tool 2, I input a uniform distribution.
And I've got to say, in defense of my friends in the tool room, a bimodal distribution only happens in a situation like what happened last month, where the tools we wanted to use were busy on the production floor, so for the experiment we used some old iterations -- we actually mixed iterations.
When that happens, we can get a bimodal distribution.
The uniform distribution never happens with these guys; they're always shooting for the center point and usually come within a couple of microns. The other distributions are all normal.
Various widths. In one case, we had a bit of a bias to it.
These are the input distributions. Here's our predicted output.
Even though we had some non-normal starting distributions, we have a pretty much normal output distribution.
It does extend beyond our targets.
We kind of knew that.
Now, the default when you start here is 5,000 runs. I usually increase this to something like 100,000. It doesn't take any extra time to execute, and it gives you a little smoother distributions.
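Here's a rough Python equivalent of that simulation. The distributions and slopes are hypothetical stand-ins, chosen to land near the capability numbers on the slide, not the real process values:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000   # bumped up from the default 5,000 for smoother distributions

# Hypothetical input distributions patterned on the talk:
# a bimodal mold tool (mixed tool iterations), a uniform mold tool,
# and normally distributed remaining factors.
mold1 = np.where(rng.random(N) < 0.5,
                 rng.normal(-1.0, 0.3, N), rng.normal(1.0, 0.3, N))
mold2 = rng.uniform(-1.5, 1.5, N)
others = rng.normal(0.0, 0.8, size=(N, 4))

# Hypothetical sensitivities (slopes from the screening DOE).
slopes = np.array([4.0, 3.0, 2.0, 1.6, 1.2, 1.0])
cmt = 100 + np.column_stack([mold1, mold2, others]) @ slopes

lsl, usl = 90, 110
sigma = cmt.std(ddof=1)
ppk = min(usl - cmt.mean(), cmt.mean() - lsl) / (3 * sigma)
oos = np.mean((cmt < lsl) | (cmt > usl))
print(f"sigma = {sigma:.2f}, Ppk = {ppk:.2f}, out of spec = {100*oos:.1f}%")
# The sum of these non-normal inputs comes out looking close to normal.
```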
It also produces a table here; we can make the table.
Move this over here. The big advantage of this is that we can get... (I don't want the CMT yet)... let's look at the distributions of the input factors.
This is a bigger, fancier plot. Here is our bimodal distribution, the uniform, and these various normal distributions of various widths; this one has kind of a bias to it.
So we can take all those, add them up, and look at the result: we have a distribution that looks pretty normal, even though some of the inputs were not normal.
We can use conventional techniques on this. So when we start setting the specs, it does extend beyond our spec limits, so we're going to need to make some improvements. Scroll down here and look at the capability information.
Ppk of .6.
That's a nonstarter. No way is manufacturing going to accept a process like this.
So we need to make some significant improvements.
So let's go back to the PowerPoint file, and I'll scroll through the slides that were my backup in case I had a problem with the live version of JMP -- because of me having the problem, not JMP.
So here we have the factors.
The standard deviations come from our preproduction trials to estimate the variability. The sensitivities are the results from our DOE.
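Putting those two columns together is the heart of the budget: each factor's contribution to the product variance is (sensitivity × sigma)², and because variances add, the predicted total is just the sum. A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical entries from the record-keeping table:
# process standard deviations (preproduction trials) and
# sensitivities (DOE slopes), in matching units.
factors = ["Mold Tool 1", "Mold Tool 2", "Monomer 1",
           "Monomer 2", "Cure 1", "Cure 2"]
sigmas = np.array([1.04, 0.87, 0.80, 0.80, 0.80, 0.80])
slopes = np.array([4.0, 3.0, 2.0, 1.6, 1.2, 1.0])

# Each factor's contribution to the product variance.
contrib = (slopes * sigmas) ** 2
total = contrib.sum()
print(f"predicted total variance: {total:.1f} (budget goal: 6.25)")
for name, c in sorted(zip(factors, contrib), key=lambda t: -t[1]):
    print(f"  {name:12s} {c:6.2f} ({100 * c / total:.0f}% of total)")
```

Sorting the contributions like this makes the low-hanging fruit obvious: the biggest (slope × sigma)² terms are the first candidates for improvement projects.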