When you perform a gauge linearity study and the mean bias increases as you move up the range of measurements, you are said to have a "linearity" problem with the gauge, as indicated by a positive slope in the bias-versus-scale graph. If the slope of the bias graph is zero (random bias), you are said not to have a linearity problem. But what do you call a graph with no linearity problem in the "average" bias over the range of the gauge, yet increasing "variation" in the multiple measurements taken along the scale? In other words, if you take 10 measurements at each point along the scale, the plotted measurements fan out in a cone shape. Do you just call it a gauge repeatability problem with a special cause of variation, or is there a name for it?
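The cone-shaped pattern described above can be sketched numerically. Below is a minimal, hypothetical illustration (the reference values, repeat count, and error model are all made up, not from any actual gauge study): repeat measurements are simulated with zero average bias but a standard deviation that grows with the reference value, and a simple least-squares slope of spread versus reference value flags the non-constant repeatability.

```python
# Hypothetical sketch: spotting increasing repeatability variation
# (a "cone" of measurements) across a gauge's range, even when the
# mean bias shows no linearity problem.
import random
import statistics

random.seed(0)

reference_points = [10, 20, 30, 40, 50]  # assumed reference values
n_repeats = 10

# Simulate 10 repeat measurements at each reference point with zero
# average bias but a spread proportional to the reference value.
measurements = {
    ref: [random.gauss(ref, 0.02 * ref) for _ in range(n_repeats)]
    for ref in reference_points
}

# Per-point repeatability: sample standard deviation of the repeats.
spreads = {ref: statistics.stdev(vals) for ref, vals in measurements.items()}

# Least-squares slope of spread vs. reference value: a clearly positive
# slope signals increasing variation along the scale, even though the
# slope of the *mean* bias may be near zero.
xs = list(spreads)
ys = [spreads[x] for x in xs]
x_bar = statistics.mean(xs)
y_bar = statistics.mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))

print(f"spread-vs-reference slope: {slope:.4f}")
```

A positive slope here corresponds to the cone widening toward the high end of the scale; a slope near zero would indicate roughly constant repeatability across the range.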
Increasing Variation Non-bias