DSchweitzer
Level II

Gauge R&R interpretation when accounting for Crossed effects

The MSA platform used to calculate Gauge R&R recommends selecting Crossed as the model type. This is slightly different from evaluating only the main effects of Appraiser and Part, which is how many spreadsheets and tools calculate the gauge. I understand the concept of crossed effects, but I'm having a difficult time understanding how the crossed variance component is calculated (i.e., how I would hand-calculate it to understand what drives this error source). It seems that under certain conditions the crossed effect can lead to erroneous (unexpected) results, and we should not use the result to judge the acceptability of the gauge.
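For a balanced crossed study, the variance components can be reproduced by hand from the two-way ANOVA mean squares via the expected-mean-squares (EMS) method, which is the standard ANOVA approach for crossed Gauge R&R. Here is a minimal sketch in Python; the column names, tolerance, and 6-sigma multiplier are illustrative assumptions, not JMP's actual implementation:

```python
import numpy as np
import pandas as pd

def grr_crossed(df, part="Part", op="Appraiser", y="Y", tol=0.9, k=6):
    """Variance components for a balanced crossed Gauge R&R study,
    estimated with the ANOVA / expected-mean-squares (EMS) method."""
    p = df[part].nunique()            # number of parts
    o = df[op].nunique()              # number of appraisers
    r = len(df) // (p * o)            # replicates per part*appraiser cell

    grand = df[y].mean()
    ss_p = o * r * ((df.groupby(part)[y].mean() - grand) ** 2).sum()
    ss_o = p * r * ((df.groupby(op)[y].mean() - grand) ** 2).sum()
    cell_means = df.groupby([part, op])[y].mean()
    ss_po = r * ((cell_means - grand) ** 2).sum() - ss_p - ss_o
    ss_e = ((df[y] - df.groupby([part, op])[y].transform("mean")) ** 2).sum()

    ms_p = ss_p / (p - 1)
    ms_o = ss_o / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_e = ss_e / (p * o * (r - 1))

    # EMS solutions; negative estimates are truncated to zero
    comps = {
        "Repeatability":  ms_e,
        "Part*Appraiser": max((ms_po - ms_e) / r, 0.0),
        "Appraiser":      max((ms_o - ms_po) / (p * r), 0.0),
        "Part":           max((ms_p - ms_po) / (o * r), 0.0),
    }
    # percent of tolerance: k * sigma for each component over the tolerance
    pct_tol = {name: 100 * k * np.sqrt(v) / tol for name, v in comps.items()}
    return comps, pct_tol
```

The crossed (interaction) term is the "Part*Appraiser" line: whatever part of the cell means cannot be explained by the part and appraiser main effects alone, scaled by the number of replicates.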

In the analysis below I have a very good (almost ideal) gauge that yields essentially no variation within the resolution of the measurement step size. Appraiser 2, however, consistently measured part 9 0.005 V lower than all the other appraisers did. This error is well within the 0.9 V tolerance, yet the gauge analysis using crossed effects reports that 61.21% of the measurement variation comes from the crossed effect of this one appraiser and part. That doesn't make physical sense to me and seems to be a special corner case caused by the low (or absent) variation.
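To make the corner case concrete, here is a hedged simulation of roughly that scenario (the part values and layout below are made up, not the actual data): a gauge with zero random error plus a single 0.005 V appraiser-by-part offset. Because the within-cell error is exactly zero, the EMS method has nowhere to put the offset except the interaction term, so the crossed effect ends up as essentially all of the (tiny) gauge variation:

```python
import pandas as pd

rows = []
for pn in range(1, 11):                 # 10 hypothetical parts
    true_val = 1.0 + 0.1 * pn           # made-up true part values
    for an in (1, 2, 3):                # 3 appraisers
        for _ in range(3):              # 3 replicates, no random error
            val = true_val
            if an == 2 and pn == 9:     # appraiser 2 reads part 9 low
                val -= 0.005
            rows.append((f"P{pn}", f"A{an}", val))

df = pd.DataFrame(rows, columns=["Part", "Appraiser", "Y"])
comps, pct_tol = grr_crossed(df)        # sketch from earlier in this post
print(comps)  # Repeatability = 0; the gauge variance is all Part*Appraiser
```

The interaction variance itself is minuscule (the offset is only 0.005 V), but expressed as a percentage of the gauge total it dominates, which is the artifact in question.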

 

At some point there must be a crossover where this effect starts to falsely influence the gauge results, and I'm trying to understand how to interpret the results and in which cases the analysis should be changed to avoid this artifact. It seems there may be situations where the math leads to incorrect conclusions and the crossed effect should not be used.

 

[Attached image: DSchweitzer_0-1730407392147.png]

 

3 REPLIES
statman
Super User

Re: Gauge R&R interpretation when accounting for Crossed effects

Here are my thoughts. First, a crossed effect is an interaction. This means that the effect of one of the terms in the model depends on the level of another term in the model.

You have stumbled upon a significant issue with all quantitative analysis: running the quantitative analysis (e.g., ANOVA) before examining the data for unusual data points and patterns, and then interpreting the results without that context. You should seek to understand why appraiser 2 got different results for UUT #9. I don't see how you conclude "consistently" (I only see one data point), but nonetheless: why? Within this data set, Deming might call this data point "special" and Shewhart would call it "assignable".
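One quick way to screen for such points before the ANOVA is to look at the interaction residuals of the cell means (cell mean minus part mean minus appraiser mean plus grand mean); a single discrepant appraiser-by-part cell jumps out. A rough sketch, reusing the hypothetical table layout from the earlier post:

```python
# interaction residuals of the Part*Appraiser cell means
cell = df.groupby(["Part", "Appraiser"])["Y"].mean()
part_m = df.groupby("Part")["Y"].mean()
op_m = df.groupby("Appraiser")["Y"].mean()
grand = df["Y"].mean()

resid = (cell
         - part_m.reindex(cell.index.get_level_values("Part")).values
         - op_m.reindex(cell.index.get_level_values("Appraiser")).values
         + grand)
# the (P9, A2) cell should top this list by a wide margin
print(resid.abs().sort_values(ascending=False).head(3))
```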

 

The percentages are of the tolerance. I personally don't think this is the only comparison by which to judge a measurement system. IMHO, it is the least important one, as tolerances are derived independently and typically without any insight into the process making the product (or the amount of product variation). What you want your measurement system to do is quantify the variation in the product so that the variation may be reduced or predicted. To assess this, you must consider:

1. the discrimination (effective resolution),

2. the stability/consistency,

3. the precision (repeatability and, perhaps, reproducibility)

of the measurement system.

 

The software is going to want to account for this one unusual data point. Since it is, as you can see from the plot, associated with only one appraiser, it is quantified as an interaction effect (i.e., the effect of appraiser depends on which UUT is measured). Don't be misled by the size of the percentage (notice that the Part (UUT) variation accounts for 720.79% of tolerance!). You don't have much variation among appraisers; most of the variation is due to the UUT (see the variance component plot).
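For reference, the percent-of-tolerance figures in the report are typically computed component by component as (assuming the common 6-sigma study-variation multiplier):

$$\%\,\text{Tolerance} = \frac{6\,\hat{\sigma}_{\text{component}}}{\text{USL} - \text{LSL}} \times 100$$

Each component's standard deviation is compared to the tolerance width separately, which is why the Part (UUT) line can legitimately exceed 100%: the part-to-part spread is simply much wider than the tolerance window, and that says nothing about the gauge itself.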

 

BTW, one of the issues with crossed studies is that they are usually inadequate for assessing consistency; control charts don't handle crossed effects. For understanding consistency, nested studies are preferred.

"All models are wrong, some are useful" G.E.P. Box

Re: Gauge R&R interpretation when accounting for Crossed effects

For R&R purposes, where the effects are random by definition, does it not make sense to run only a nested design? It is not a matter of preference but a matter of objective. In other words, a random factor effect is best estimated from a nested design. Thoughts?


statman
Super User

Re: Gauge R&R interpretation when accounting for Crossed effects

Sorry, I don't understand "for R&R purposes, random effects by definition." The R & R stands for the two components of precision: repeatability and reproducibility. What do you mean by "random effects by definition"? I also don't understand what you mean by "random factor effect is best estimated from a nested design." Nested, or hierarchical, designs are quite useful for components-of-variation studies, and particularly useful when assessing stability/consistency. If that is the objective of the study, then yes, a nested study is most effective. If, however, you are trying to understand causal structure and develop predictive models, crossed studies are more effective (e.g., DOE).

 

There is no one way to perform any analysis. The appropriate analysis is a function of the questions you are trying to answer and depends entirely on how the data were acquired.

"All models are wrong, some are useful" G.E.P. Box