Hi. I have a question for any 6-sigma folks out there. In the Quality and Process Methods section of the User Manual, under "Statistical Details for Control Chart Builder", the standard deviation seems to be approximated using the range. For example, for XmR charts, the standard deviation formula is
sigma-hat = MR-bar / d2
where MR-bar is the average of the moving ranges and d2 is a value from a lookup table.
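For concreteness, here is a minimal sketch of that moving-range calculation in Python (the measurement values are purely hypothetical, and d2 = 1.128 is the standard lookup-table constant for moving ranges of span 2):

```python
import numpy as np

# Hypothetical individual measurements, as plotted on an XmR chart
x = np.array([10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0])

# Moving ranges: absolute differences between consecutive points
mr = np.abs(np.diff(x))

# d2 constant for a subgroup size of 2 (standard lookup-table value)
d2 = 1.128

# Range-based estimate of sigma used for the XmR control limits
sigma_mr = mr.mean() / d2
print(f"MR-bar = {mr.mean():.4f}, sigma estimate = {sigma_mr:.4f}")
```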
I'm not unfamiliar with this methodology for computing sigma and have seen it used in several other process control references. My question, though, is why? That is, why do we approximate it using this method when we could instead simply use the traditional formula for computing sigma:
sigma = sqrt( sum( (x_i - x-bar)^2 ) / (N-1) )
I understand that technically, any computation of sigma is an estimate of the "true" value, so perhaps one might argue that one formula is as good as another. However, the latter formula comes from the actual definition of standard deviation. I also understand there's the question of biased vs. unbiased estimators and the Bessel correction factor (N-1), but I feel the same issue would apply to both formulas, yes? Lastly, I understand that the latter formula requires more computation steps. Computation speed might have been an issue many decades ago, but that doesn't seem like an especially valid reason anymore.
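For comparison, here is the traditional sample standard deviation computed on the same hypothetical data as above, again just as an illustrative sketch:

```python
import numpy as np

# Same hypothetical measurements as in the sketch above
x = np.array([10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0])

# Classical sample standard deviation with the N-1 (Bessel) correction
sigma_classical = x.std(ddof=1)
print(f"classical sigma estimate = {sigma_classical:.4f}")
```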
Perhaps I'm wrong about one or more of my assumptions above, but if so, I'd love to know why. I've searched around the internet and other references, and I've never been able to come up with a satisfactory answer. Does anyone know?
Thanks, and Happy New Year!