
Community Trekker · Joined Jun 19, 2014

## Gauge variation, linearity and bias

When you perform a gauge linearity study and the mean bias increases as you move up the range of measurements, you are said to have a "linearity" problem with the gauge, indicated by a positive slope in the bias-versus-scale graph.  If the slope of the bias graph is zero (random bias), you are said not to have a linearity problem with the gauge.  But what do you call it when the graph shows no linearity problem in the "average" bias over the range of the gauge, yet the "variation" of the repeated measurements increases along the scale?  In other words, if you take 10 different measurements at each point along the scale, the plotted points form a cone shape.  Do you just call it a gauge repeatability problem with a special cause of variation, or is there a name for it?
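To make the cone shape concrete, here is a minimal simulation sketch in Python (hypothetical numbers, not from any real gauge study): ten repeat measurements at each of several reference values, with zero mean bias but a repeatability SD that grows in proportion to the reference.

```python
# Minimal sketch of the pattern described above: zero average bias
# across the range, but measurement spread that grows with scale.
# All values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

references = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # hypothetical master values
n_repeats = 10                                      # measurements per reference

for ref in references:
    # No systematic bias (mean error 0), but SD proportional to the
    # reference: plotted against scale, the points form a cone.
    measurements = ref + rng.normal(loc=0.0, scale=0.02 * ref, size=n_repeats)
    bias = measurements - ref
    print(f"ref={ref:5.1f}  mean bias={bias.mean():+.4f}  SD={bias.std(ddof=1):.4f}")

# The usual linearity check (slope of mean bias vs. reference) comes out
# near zero here, even though the repeatability SD clearly increases.
```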

Labels: Increasing Variation, Non-bias

3 REPLIES

Staff · Joined Sep 10, 2014

## Re: Gauge variation, linearity and bias

It's a good question! You are correct that technically, in the measurement standard documents, linearity is defined as the absence of variability due to bias over the measurement range. In the JMP course notes for the MSA course, it says "Some authors include the absence of variability due to reproducibility over the measurement range." In my experience, companies that notice this problem call it linearity, while other companies don't notice the problem and so never ask the question. I have not heard of another term for it, particularly in MSA.
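One way to look at the two effects separately, sketched below with simulated data: regress the bias on the reference value (the classic linearity check), and separately regress the log of the per-reference standard deviation on the reference to see whether spread grows with scale. The variable names and the log-SD regression are illustrative choices, not a prescribed MSA procedure.

```python
# Hedged sketch: separate (1) mean bias drifting with scale (classic
# linearity) from (2) repeatability spread growing with scale.
import numpy as np
from scipy.stats import linregress

# Hypothetical gauge study: 10 repeats at each of 5 reference values,
# no mean bias, SD proportional to the reference.
rng = np.random.default_rng(1)
ref = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0], 10)
meas = ref + rng.normal(0.0, 0.02 * ref)

bias = meas - ref

# (1) Classic linearity: is the slope of bias vs. reference nonzero?
lin = linregress(ref, bias)
print(f"linearity slope = {lin.slope:+.4f}, p = {lin.pvalue:.3f}")

# (2) Spread vs. scale: regress log per-reference SD on the reference.
levels = np.unique(ref)
sds = np.array([bias[ref == level].std(ddof=1) for level in levels])
spread = linregress(levels, np.log(sds))
print(f"log-SD slope = {spread.slope:+.4f}, p = {spread.pvalue:.3f}")
```

On data like these, the first regression finds nothing while the second flags the increasing spread, which is exactly the case the standard linearity definition misses.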

Community Trekker · Joined Jun 19, 2014

## Re: Gauge variation, linearity and bias

Here is my problem.  A gauge R&R is based on ANOVA, either two-way or three-way.  One of ANOVA's assumptions is equal variances.  So in the case where the variance changes over the range of the measurement device, will this cause a problem with the gauge R&R results?
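To see why this matters, here is a rough sketch (not JMP's implementation, and with hypothetical data) of a crossed parts-by-operators ANOVA computed by hand. The repeatability term pools all within-cell variation into a single mean square, so when the error SD grows with part size, cells with very different variances get averaged into one number.

```python
# Rough sketch of a crossed two-way gauge R&R ANOVA with an
# unequal-variance violation. Hypothetical data, not JMP's method.
import numpy as np

rng = np.random.default_rng(7)
part_values = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # p parts across the range
n_ops, n_rep = 3, 5                                  # o operators, r repeats

p, o, r = len(part_values), n_ops, n_rep
# Error SD proportional to part size: the equal-variance assumption fails.
data = (part_values[:, None, None]
        + rng.normal(0.0, 0.02 * part_values[:, None, None], size=(p, o, r)))

grand = data.mean()
part_means = data.mean(axis=(1, 2))
op_means = data.mean(axis=(0, 2))
cell_means = data.mean(axis=2)

# Classic crossed two-way ANOVA mean squares.
ms_part = o * r * ((part_means - grand) ** 2).sum() / (p - 1)
ms_op = p * r * ((op_means - grand) ** 2).sum() / (o - 1)
ms_int = r * ((cell_means - part_means[:, None] - op_means[None, :]
               + grand) ** 2).sum() / ((p - 1) * (o - 1))
ms_err = ((data - cell_means[..., None]) ** 2).sum() / (p * o * (r - 1))

print(f"MS part={ms_part:.4f}  MS op={ms_op:.4f}  "
      f"MS int={ms_int:.4f}  MS err (pooled repeatability)={ms_err:.5f}")

# What the pooled number hides: within-cell variance by part.
for val, block in zip(part_values, data):
    within = ((block - block.mean(axis=1, keepdims=True)) ** 2).sum() / (o * (r - 1))
    print(f"part value {val:4.1f}: within-variance {within:.5f}")
```

With a 2% proportional error as assumed here, the true within-variance for the largest part is about 25 times that of the smallest, yet the ANOVA reports one pooled repeatability, understating the error at the top of the range and overstating it at the bottom. If a proportional-error model fits, one common workaround is to analyze on a log scale or run separate studies over narrower ranges.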

Staff · Joined Sep 10, 2014