I'm Jason, a Systems Engineer with JMP.
For this presentation, I'll be playing the role of a Quality Consultant.
My name is Jerry Fish. I'm a Systems Engineer with JMP.
I'll be playing the role of a Quality Manager at a manufacturing plant.
Just a couple of references for this presentation upfront.
Jerry and I are building on a paper
that we presented at Discovery Europe 2023.
That's the paper number.
You can go to community.jmp.com
and search for that paper.
Jerry will also be presenting an on-demand video for Discovery on Demand.
That's the paper number to search as well.
Throughout the presentation, we will be referencing
Dr. Donald Wheeler's book, EMP III:
Evaluating the Measurement Process and Using Imperfect Data.
All right, Jerry, ready to kick this off?
Absolutely, go for it.
All right, I'm calling you up. Hi, Jerry.
Thanks for spending a few minutes with me.
As a quality consultant,
I help quality stakeholders like yourself understand and improve processes.
Hi, Jason. Nice to meet you.
I need to let you know, I don't have much time to spend with you right now.
We've got an emergency situation on our
production line that I really need to go address.
I know we've got 30 minutes scheduled
and I'm interested in discussing gages, but can we try to make this quick?
I understand completely.
I'll try to make the most of our time today.
To kick things off,
can you tell me a little bit about your company and your quality program?
Sure.
You probably know that Acme has built a reputation with our customers
for manufacturing the highest-quality products.
We're always concerned with quality.
We have various gages that we use to ensure our quality stays high,
and we've been doing this for many years, so we think we're pretty good at it.
I'm familiar with Acme's high quality reputation.
My consulting team and I have been working
with manufacturing companies like yours that have a high focus on quality
to advance the use and effectiveness of gage studies.
One of the things we seek to understand is the monetary cost
associated with the gages used to measure process quality characteristics.
Have you quantified how much any of your gages are costing your business?
I'm not sure I know what you mean.
Well, gages aren't perfect.
They make mistakes.
Sometimes they'll throw away good parts and sometimes they'll pass bad parts.
Unless you have a perfect gage and no one has these, mistakes are inevitable.
Yeah, I get that, I suppose so.
We've done traditional gage studies that say our gages are good.
Well, some of them are actually categorized as adequate per the
AIAG (Automotive Industry Action Group) method, which we use for our testing.
Doesn't that mean they're okay to use?
There's a lot more to the story than just
using the good, adequate, and poor AIAG assessment.
In fact, we could have a long discussion
about the differences between AIAG results and a newer method called EMP 3.
I know you said you have 30 minutes.
To sum it up, AIAG
is just much more critical of gages than it needs to be.
The EMP method, which was pioneered by
Dr. Donald Wheeler, is a more realistic way to evaluate gages.
I appreciate you skipping over that part.
We recognize no gage is perfect.
We also recognize that using a poor gage
risks accepting bad parts, which is bad for my business.
The worse the gage, the higher the risk.
We've got a way that we handle this problem.
Oh, what's that?
Well, we instruct our manufacturing group
to use inspection limits that are inside of the specifications.
If we set them far enough inside these spec limits,
we reduce, and can essentially eliminate, shipping bad parts.
Doesn't that fix our problem?
Using inspection limits that are inside the spec limits
is a good way to reduce the risk of passing bad parts
when the gage measures a part near the specification limits.
This is a common strategy, something companies call
guard banding, or specifying manufacturing instructions.
How do you choose how much to narrow your inspection limits?
Well, frankly, I'm not sure we put that much rigor into it.
We just move our inspection limits inboard of the specs by some amount
that our subject matter experts have found work well over the years.
Well, you're in good company.
Many others like you employ the same approach.
There is a downside, though.
Oh, what's that?
By arbitrarily choosing how much to narrow
inspection limits, you may be unnecessarily throwing out good parts.
That's assuming they're too narrow.
Or you could still have bad parts
that escape inspection if the limits aren't narrow enough.
Have you considered taking a more statistical approach?
We have not.
I've always thought we should have a better way to justify our
manufacturing instructions, but we don't know how to do that.
Well, Dr. Wheeler proposes a method of setting
manufacturing instructions based on the statistics we get from doing
a measurement systems analysis in his book, EMP 3.
I honestly haven't heard of Wheeler's EMP method before.
Is it new?
It's been around for a couple of decades now.
More importantly, to my knowledge, Dr.
Wheeler was the first person to publish
a solution to the problem of objectively setting these inspection limits.
His approach uses outputs from an MSA, like we were mentioning.
Some of these are probable error,
along with your expectations for conformance to specifications.
Do any of the modern statistical analysis
packages that are on the market today support this EMP method?
I'm glad you asked.
JMP, my preferred data analytics tool,
is a general-purpose, easy-to-use data analytics package
that has many quality and process control features.
JMP makes quick work of the analytics part of process improvements,
so more time can be dedicated to improving the process itself.
You know that that's not a trivial amount of work, typically.
JMP's EMP personality in the Measurement
Systems Analysis platform is based on Wheeler's book.
All that stuff is in there.
The calculations for manufacturing instructions are not available in JMP yet.
We built an add-in that does them for you
and provides some tools for cost trade-off analysis.
Okay, good.
I was afraid I might have to come up with my own equations.
I'll order a copy of that book for my team.
If you've already got an add-in
that does that for us, I'm definitely interested.
What does it cost and how does it work?
Well, good news, it's free.
You can find it in the File Exchange
on community.jmp.com, the JMP User Community.
Let me show you how it works. Great.
All right. Here on the left side,
we have to enter a little bit of information about our product.
First is the specification limits.
You know, this is what our customer wants.
This is voice of the customer.
An upper and lower specification limit.
We need to enter what we know about our measurement error.
We're going to use the standard deviation of our measurement error.
This is something that we can get from a measurement systems analysis.
We also need to supply some information
about what we expect the true part distribution to look like.
The parameters, mean and standard deviation, give us a description of what
the true part distribution is for our process.
Wait a minute.
I don't know my true part distribution.
That's the problem here.
All I've got is my measured part distribution.
Okay, let's, for the moment,
assume that the true part distribution is normally distributed,
and the gage errors are also normally distributed.
We can use a simple relationship to get the true part variance.
If I did a measurement systems analysis,
I have the variance associated with my gage.
If I did a bunch of measurements
of my process, I have a measured part distribution of my process.
I just have to subtract those, and then I can get the true part variance.
From that, I can get the standard deviation for my true part distribution.
The mean is simple also.
We'll just assume that the true mean
is the mean of the measured part distribution less any bias in the gage.
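That subtraction can be sketched in a few lines of Python. The numeric inputs here are hypothetical stand-ins; in practice the gage sigma and bias come from your measurement systems analysis, and the measured mean and sigma come from a study of the process.

```python
import math

# Hypothetical inputs: a real study would supply these values.
sigma_measured = 5.4   # std dev of the measured part distribution
sigma_gage = 2.0       # gage std dev from a measurement systems analysis
mean_measured = 65.2   # mean of the measured part distribution
gage_bias = 0.2        # gage bias, if known from the MSA

# Variances of independent normal sources add, so subtract the gage
# variance from the measured variance to recover the true part variance.
var_true = sigma_measured**2 - sigma_gage**2
sigma_true = math.sqrt(var_true)

# The true mean is the measured mean less any gage bias.
mean_true = mean_measured - gage_bias

print(sigma_true, mean_true)
```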
Okay. I think I'm with you so far.
We are still moving towards how much my gage is costing my business, right?
My line still has that problem that I need to address.
Got you. We are.
If you'll indulge me just a little bit longer.
We have more work to do.
First, let's look at the statistics summary.
Probable error, intraclass correlation, and
precision-to-tolerance are ways of describing gage variation
that we get by conducting Wheeler's measurement systems analysis.
These values are computed using
specification limits and gage sigma inputs.
I'll let you read about these calculations when you get Wheeler's book.
Okay.
The capability indices are used to describe how much of the spec
limit range is consumed by process variation.
This is the variation of the true-part values.
Capability is calculated using spec limits and our true-part mean in sigma.
Yes, at Acme, we regularly use Cp and Cpk.
Nice.
Collectively, you're probably getting an idea that these values provide us
a nice snapshot of gage and process variation.
It's going to help us.
For this example, an ICC of 0.86 is telling us that 14%
of our measurement variation is a result of gage error.
But our process is not performing so well, at a capability of only about 0.67.
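Those summary statistics all follow from the spec limits and the two sigmas. Here's a sketch using the standard definitions (probable error as 0.675 times the gage sigma); the sigmas are hypothetical values chosen to give an ICC near the 0.86 in this example.

```python
import math

lsl, usl = 55.0, 75.0     # spec limits from the demo
sigma_part = 5.0          # hypothetical true part sigma
sigma_gage = 2.0          # hypothetical gage sigma from an MSA

# Intraclass correlation: share of total measurement variance
# attributable to real part-to-part variation (1 - ICC is the
# share attributable to gage error).
icc = sigma_part**2 / (sigma_part**2 + sigma_gage**2)

# Probable error: half of repeated measurements fall within +/- PE
# of the true value (0.675 is the normal third-quartile z, rounded).
pe = 0.675 * sigma_gage

# Precision-to-tolerance ratio: gage spread versus spec width.
pt = 6 * sigma_gage / (usl - lsl)

print(round(icc, 3), round(pe, 3), round(pt, 3))
```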
The last thing that we need is expected conformance at manufacturing instructions.
That's a mouthful.
What do you mean by that?
It is. Let me see if I can unravel that.
This is the probability that a measured part will truly conform to
specifications if the gage measures exactly
at specification limits that we discussed earlier.
Exactly at 55 or 75.
For example, let's say that we want 99% conformance.
To achieve this,
we need to set our manufacturing instructions.
Let me make this a little bit bigger for us here.
We need to make our manufacturing
instructions 57.3 and 72.7.
That's actually narrower than the 55 and 75 that are our spec limits.
Then if we do that,
we can expect a 99% chance that we will correctly reject or accept
a part that's near those specification limits.
Okay, let me echo that back to you to see if I've got that right.
If I set my inspection limits for this example to 57.3 and 72.7,
and then if I measure a part with my gage that measures exactly 57.3,
there's a 99% chance that it meets my product specifications of 55 and 75.
That's right. You said it even better than I could.
In my mind, it's really useful to think about measurement risk in this way,
as the measurements we make near spec limits are the riskiest we encounter.
It also exposes trade-offs between gage effectiveness and process capability.
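One way to find inspection limits for a target conformance, under a simple normal Bayesian model: given a measurement, the true value has a known conditional distribution, so we can bisect for the measured value where the conformance probability hits the target. This is a sketch, not Wheeler's published formulas, and the sigmas are hypothetical, so the result won't match the demo's 57.3 exactly.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lower_conformance(m, lsl, mu, sigma_p, sigma_e):
    """P(true value >= lsl | measurement == m), both normal."""
    icc = sigma_p**2 / (sigma_p**2 + sigma_e**2)
    cond_mean = mu + icc * (m - mu)                       # shrink toward mu
    cond_sd = sigma_p * sigma_e / math.sqrt(sigma_p**2 + sigma_e**2)
    return 1.0 - norm_cdf((lsl - cond_mean) / cond_sd)

def lower_instruction(target, lsl, mu, sigma_p, sigma_e):
    """Bisect for the measured value where conformance hits target."""
    lo, hi = lsl, mu   # conformance rises as m moves inward
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lower_conformance(mid, lsl, mu, sigma_p, sigma_e) < target:
            lo = mid
        else:
            hi = mid
    return hi

mi_lower = lower_instruction(0.99, lsl=55.0, mu=65.0, sigma_p=5.0, sigma_e=2.0)
print(round(mi_lower, 2))
```

The upper instruction is computed symmetrically against the upper spec limit.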
Think about this, Jerry.
If the process is highly capable,
you can get away with a marginal gage even when measuring near the spec limits.
However, if you have a poor CPK, it may be smart to adjust
the manufacturing instructions to mitigate risk, even if the gage is effective.
That sounds like what my subject matter
experts have been trying to do, but you're saying that we can put a lot
more statistical rigor behind setting those manufacturing instructions.
Your example sets manufacturing instructions to give 99% conformance.
I assume if I want 99.9%
conformance, those manufacturing instructions will narrow further?
They will.
In fact, let's just take a look at how much.
Wow, that's quite a bit.
Yeah.
58.8 and 71.1. Okay, I think I'm getting it.
Narrowing those manufacturing instructions is great for guaranteeing that I'm
shipping good parts, but I don't necessarily like that.
Why not? In fact, I'll switch it back.
Why don't you like that, Jerry?
Well, the narrower we set those manufacturing instructions,
the more good product I'm going to throw away, aren't I?
Yes, that's right.
I think you're beginning to see
the trade-offs associated with setting manufacturing instructions.
It's pretty cool. Yeah.
Can you tell me how much it's going to cost me when
I throw that good product
away given the selected manufacturing instructions?
Yes.
That's given in the Profit Loss Explorer outline of the add-in.
Before I can answer that, though, we have to enter some values.
Hypothetically, when we're talking about one of your
parts, how much revenue can you expect per part?
Okay, well, that number looks good.
For the sake of this demo or this example, let's just say $100 per part.
Okay.
How many parts do you expect to make?
100,000 sounds good.
That's typical, at least for maybe a year's production.
Nice. Good part run.
How much does it cost you to make a part?
Pretty easy to figure that out.
Out of the $100 that we sell a part for,
it costs us about $70 to make it, so we make a $30 profit.
Last but not least, what is the cost of shipping a bad part?
What's the penalty if a part escapes your inspection process?
I see what you're saying.
That's a little tougher.
There's, of course,
the potential cost of return and the associated cost of repairs.
Those are fairly easy to calculate.
There's also damage to Acme's reputation.
Our customers demand quality,
and if we start putting bad product out the door,
it can get out of hand in a hurry and result in lost future sales.
That's a lot more difficult to calculate.
Really, it would take us a while to scratch our heads and figure that out.
For the sake of argument, let's say that amounts to about $200 per
bad part that we let out of the plant and sell.
All right, great.
Now, I see one more entry box.
Another word salad:
"Number of PEs manufacturing instructions lie from WSLs."
What in the world does that mean?
All right, you definitely have exposed
another set of terminology worthy of explanation, Jerry.
Let me take a stab at this.
What we're referring to are integer multiples of probable error.
Remember, we talked about that as being one of the outputs from an EMP measurement
systems analysis. We're talking about the integer multiples
of probable error away from watershed limits.
This is how we will describe the location of manufacturing instructions relative
to watershed limits for the simulation that we're running.
Turns out Wheeler uses the same approach in EMP 3.
When you get the book, take a look at that.
It should be pretty consistent.
He uses watershed limits as opposed
to spec limits to account for granularity of measurements.
If we performed a measurement systems analysis using the EMP
method on this gage, we'd be given a measurement increment.
Watershed limits are essentially half a measurement increment outside the specs.
From a practical point of view, bottom line, manufacturing instructions
should be set from watershed limits as opposed to specification limits.
That just helps us get around that granularity problem.
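The arithmetic tying specs, watershed limits, and manufacturing instructions together is simple. Here's a sketch with a hypothetical measurement increment and gage sigma, using the sign convention described later in the demo (a negative number of probable errors narrows the instructions inside the watershed limits):

```python
lsl, usl = 55.0, 75.0
increment = 0.1            # hypothetical measurement increment from the MSA
sigma_gage = 2.0           # hypothetical gage sigma
pe = 0.675 * sigma_gage    # probable error

# Watershed limits sit half a measurement increment outside the specs.
wsl_lower = lsl - increment / 2
wsl_upper = usl + increment / 2

# Manufacturing instructions lie n probable errors inside (n < 0)
# or outside (n > 0) the watershed limits.
def instructions(n_pe):
    return (wsl_lower - n_pe * pe, wsl_upper + n_pe * pe)

print(instructions(-1))   # one PE inside the watershed limits
```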
Now, it's also worth noting that manufacturing
instructions can be set inside or outside the watershed limits.
Now, wait a minute.
That sounds crazy.
Are you saying that there are times when I
might want to set my manufacturing instructions outside of my product specs?
Why would I ever want to do that?
Yeah, it seems counterintuitive, and I'd argue that it's relatively rare.
There are some conditions where
for strictly economic reasons, you might be better off choosing an even
wider set of manufacturing instructions than your product specs.
We can talk through an example in just a second.
Remember when we were
exploring what the statistics meant, we talked about this idea that if I
have a process that is very capable but an imperfect gage,
it may be smart for us economically to actually push our manufacturing
instructions wide and adjust the inspection process accordingly.
There are circumstances where that might
happen, but yeah, I'd argue that they're pretty rare.
I see. I don't know.
For now, for the sake of the simulation,
let's set the number of probable errors to minus one.
What this means is that we're narrowing our manufacturing instructions.
Let me make this a little bit wider here.
We're talking about narrowing
the manufacturing instructions inside the specification limits.
When we do that, we can
focus your attention down to the expected net profit at manufacturing instructions.
Here we can see the selected expected net profit:
we stand to make about $1.9
million on 100,000 parts with the parameters that we've entered.
Okay, well, at least that's a profit.
I see a maximum expected net that shows an even higher profit.
Is that something I should consider or can I achieve it?
Yes, definitely.
The maximum profit that you can achieve for this measurement process and
this economic set of circumstances is about $2.1 million.
Now, what this is telling us is that we can achieve it by making our manufacturing
instructions equal to the watershed limits.
If I change this to zero,
we can see that we've hit that maximum expected net.
We've just done that by
moving the manufacturing instructions to the watershed limits.
This is optimal for the conditions that we've entered.
That's really interesting.
With any combination like this, potentially, I could find an optimal
setting for these manufacturing instructions
to get me the maximum profit.
Yeah, you're nailing it.
In fact, in the add-in, there's an asterisk on the graph
that identifies the optimal profit you can achieve.
Cool.
When you're computing this optimum, where are you getting these numbers?
I don't understand where they might come from.
Sure. They result from trade-offs between
revenue per part, cost per part, and damage to your reputation
when we sell a bad part. Those are the things that we entered,
given the process and gage parameters,
that is, the true part and measurement error components that we entered earlier.
For now, like I said, let's keep the
example to where we're narrowing our
manufacturing inspection limits. It's so funny.
We get all these different limits.
We're talking about manufacturing
instructions, which are inside the watershed limits.
Now that we've got this and we've agreed
that this is the way we want to run this simulation.
Let's take a look at the profit simulation panel.
The matrix down below is where we're going
to start. It breaks the profit and loss into nine different categories.
Okay, there's a lot of information there.
I'm going to need to study this for a minute.
Let's start in the center, the center of that three by three grid.
Starting there, of the 100,000 parts that we'll make,
about 91,000 are truly good parts that our gage reads as good and that will ship.
That produces $2.7 million in profit, and I assume that works out to 91,000
parts times the $30 margin, since the revenue per part was $100 minus the $70 to make the part.
Is that right?
That's right.
Okay.
Moving straight to the left,
you're predicting we'll have 370 parts that we will ship because the gage says
they're good parts, but these parts are truly below the lower spec.
That's a relatively low number, but it's not really good.
You say it's costing me $63,000.
Am I reading that right?
Yes, indeed, you are.
Okay, and I'm sure that has to do
with that $200 penalty I'm paying for letting bad parts get out.
In this case, it looks like I may lose
about as much by shipping parts that are bad on the high side.
That's right.
You made $2.7 million selling good parts, but now the imperfect gage has cost you
a total of $140,000 because of letting bad parts out into the field.
Yeah.
Okay, well, that's not very good.
Now you've got several more cells
in that three by three grid that show losses.
What are those about?
The upper left cell shows parts that are truly bad because they are below spec.
And your gage is catching them.
That's good.
It's catching them, it's throwing them out.
The gage is doing what you expect it to do.
The downside here is that there are nearly 1,900 parts that you're throwing out,
and each of those is costing you $70.
Same thing for the lower right; similar math there.
That's another, what, $260,000 or $270,000 coming out of my pocket?
I can't blame that on the gage.
Those can only be fixed if I improve my production process.
What about those other cells?
Let's look at the upper center cell.
It's showing that we made a little over
2,200 good parts that were thrown out because the gage
measured below the manufacturing instructions.
How do you feel about that?
Oh, not so great.
We have a separate team of inspectors
that reinspect rejected parts in hopes of reclaiming some of our losses.
This is making me think that even
with that reinspection, we're still throwing out good parts.
I assume the bottom center cell is
the same, good parts that my poor gage says are higher than spec.
Yeah, that's right.
Yet there's another $150,000 or so you're losing.
All of those profits and losses are summarized in the bar chart.
We get a graphic picture of those as well.
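The nine-cell accounting can be approximated with a quick Monte Carlo simulation. This is a sketch assuming normal true parts and gage error; the economics match the demo ($100 revenue, $70 cost, $200 escape penalty), but the process parameters are hypothetical, so it won't reproduce the add-in's exact numbers.

```python
import random

random.seed(1)

N = 100_000
LSL, USL = 55.0, 75.0
MI_LO, MI_HI = 57.3, 72.7                # manufacturing instructions
MU, SIGMA_P, SIGMA_E = 65.0, 5.0, 2.0    # hypothetical process and gage
REVENUE, COST, PENALTY = 100.0, 70.0, 200.0

def zone(x, lo, hi):
    """Classify a value as below, inside, or above a pair of limits."""
    return "low" if x < lo else ("high" if x > hi else "in")

profit = 0.0
counts = {}  # (true_zone, measured_zone) -> part count, the 3x3 matrix

for _ in range(N):
    true = random.gauss(MU, SIGMA_P)
    measured = true + random.gauss(0.0, SIGMA_E)
    key = (zone(true, LSL, USL), zone(measured, MI_LO, MI_HI))
    counts[key] = counts.get(key, 0) + 1
    if key[1] == "in":                    # gage says good: part ships
        profit += REVENUE - COST
        if key[0] != "in":                # bad part escaped inspection
            profit -= PENALTY
    else:                                 # gage rejects: part is scrapped
        profit -= COST

print(counts[("in", "in")], round(profit))
```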
I see.
Okay,
but now that assumes that we're using manufacturing instructions that are one
probable error inside the watershed limits.
What if we change the manufacturing
instructions to the optimum you were talking about?
How does that change the profit picture?
Let's see what happens.
Just watch the bars change off
to the right, and we'll talk about those a little bit.
We'll bump that up to the optimum, and what did you see happen?
Yeah, so it looked like the shorter bars
got shorter yet, and the long bar got a little longer.
That's good, and my net profit went up. Good.
We're making more money.
It's probably worth thinking about
this middle bar, because of that reputation cost that you talked about.
Yeah, sure, we may be making more as
a business, but you ought to at least be talking with stakeholders in the company
and asking: we're saving a little bit of money here to be
more profitable, but is that the right thing to do?
Yeah, I get it.
That's interesting.
Yeah, I might choose to use the narrower manufacturing instructions because
of that cost to our reputation. You really have to think about that $200 we
put in earlier; that could be a much higher number.
We'd have to sharpen our pencils on that.
Then again, I guess I could experiment with that, with this add-in, right?
I could try different values.
Yeah, absolutely.
I hope you will.
I feel like that would be valuable.
Great.
I think I'm starting to see now how
changes in process capability, gage effectiveness, and how I set the inspection
limits may have a big impact on my profitability.
That's what we hoped for when we developed this add-in.
Considering profitability when
answering questions about gage and process capabilities
could be really helpful when you're trying to justify improvement projects,
or it could also give you a lot of confidence that the process and gage
are delivering the results that you desire.
I must say, Jason, I am impressed.
However, I feel like I need to muddy the waters a little bit.
This is all great for normal
distributions, simple gage errors, et cetera.
Those calculations, as you've shown before, are easy.
What if I have linearity or bias problems with my gage?
Or if I have a skewed part distribution,
getting a true part distribution out of the measured part distribution becomes
a lot more difficult than just using that simple formula you showed earlier, right?
Can you even do that?
Today, no.
Really, we're limited to normal distributions, but we're almost there.
It's in the plan.
In fact, it's the next step in our development plans for the add-in.
I do have a colleague who's presenting
at JMP's online Discovery Conference who has done an on-demand presentation
for estimating the true part distribution from measured part distributions
that may be non-normal, given the gage characteristics.
It handles some common distributions and even arbitrary ones.
We will be incorporating these
capabilities into the add-in, like I mentioned.
It's going to be sometime soon.
You should definitely go check out that talk.
Yeah, it sounds like an interesting talk by an interesting presenter.
This all sounds interesting. I will, Jason.
Now, we've covered a lot of material today.
Would you mind quickly taking me back
through the add-in again to make sure I've got it?
Absolutely. Where we started our conversation was
supplying a little bit of information about our measurement process.
We entered specification limits.
Again, those are the voice of the customer that we talked about.
We talked about getting an idea of what
our measurement error is from a measurement systems analysis,
and then using the standard deviation from that study for a gage error.
We also talked about supplying some
information about the true part distribution.
For the parameters, mean and sigma,
we talked about ways that we can infer them from the measured part distribution.
We talked about the statistics in the add-in
being a snapshot into our gage capability as well as our process capability.
I think that's important as we begin
to have these conversations about how much the gage is going to cost us.
We walked through ideas around moving our manufacturing instructions relative
to the specification limits, narrowing them or even expanding them,
to reduce risk when we're measuring near the specification limits.
There are other bells and whistles in the add-in that we didn't have time
to show, but these are ways of illustrating
these risks by overlaying true part distributions, assuming we're measuring
a true part near the manufacturing instructions.
Profit Loss Explorer was our way
to simulate the costs associated with our measurement errors.
It gives us the opportunity to explore what ifs.
If we are making a certain product and
product has certain costs associated with it, how much profit can we expect if
we're running manufacturing instructions at different multiples of probable error?
We wrapped up our conversation
with an in-depth conversation about those profits and losses
and how the goal of our business and the reputation of our business should
be considered when we're making those trade-offs as well.
Nice.
Thank you.
I think that summarized it really well.
Now, besides using estimated true-part
distributions, is there anything else that will be developed for the add-in?
Well, we've also already developed this Misclassification Explorer.
It's just another way of looking at gage errors and our process variation.
We didn't have enough time to go into this component of the add-in.
We plan on explaining this a little bit
more in the near future, very likely as we start incorporating some
of those non-normal distributions that I mentioned.
Cool.
That's fantastic.
What about other training opportunities?
What we've shown here today is what our
add-in currently does, short of the Misclassification Explorer that we skipped.
We'd like to get more into the nuts and bolts
of the equations behind the add-in and the approaches.
We plan a series of online talks coming in the near future where we can explore
some of those differences between the AIAG classification methods and EMP and how EMP
can give us more realistic and useful information about our gage.
We'll also spend some more time talking about gage performance curves.
We talked about the conformance graphs, and we'll introduce you to some
traditional gage performance curves as well as classification curves.
(It's conformance we talked about; classification we'll get to.)
Look for those in the near future.
Great.
Who else should we thank for this effort so far?
Well, Brady.
We should have a picture of Brady right here.
Yeah, that would have been good.
Brady is the brains behind the add-in.
Unfortunately, he couldn't join us, but he was definitely heavily involved,
and we have him to thank for the add-in and the code.
Great.
Thanks to Brady, and thanks to all of you for attending and those who will be
watching this video and using the add-in later.
For those at the live demo, we'd be happy to answer questions.
Otherwise, if you put your questions in the chat area below the presentation,
we'll be glad to get back to you as soon as we can.
Thank you. Thank you very much for attending.