OK, I'll try one more time specifically addressing your comments.
There is no disagreement here. I 100% agree with you that consistency must be confirmed. Again, my point is that neither method prevents one from plotting the data to assess consistency. Therefore, this requirement cannot be used as an argument that one should or must use the range-based method instead of the traditional method for computing the standard deviation.
What data are you plotting for the SD equation? What rational subgroups do you have? To get a range, you have to subgroup (even an MR has a subgroup of consecutive measures, though MR charts are not very useful for components-of-variation studies). The rational subgroups (a mandatory element of the control chart method) provide a way to separate and assign different sources of variation. They give the practitioner a quantitative means to prioritize where the next iteration of the study should focus (which component has greater leverage).
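To make the mechanics concrete, here is a minimal sketch (Python with numpy, not JMP; the data, subgroup sizes, and the d2 constant of 2.326 for subgroups of 5 are illustrative assumptions, not anything from this thread) of the range-based estimate described above: average the subgroup ranges and divide by d2.

```python
import numpy as np

def sigma_within_from_ranges(subgroups, d2=2.326):
    """Within-subgroup sigma estimated as R-bar / d2 (d2 = 2.326 for subgroups of 5)."""
    ranges = [np.ptp(sg) for sg in subgroups]   # ptp = max - min, i.e. the subgroup range
    return np.mean(ranges) / d2

rng = np.random.default_rng(1)
# Five rational subgroups of five, with deliberate shifts between subgroups
subgroups = [rng.normal(loc=10 + shift, scale=1.0, size=5) for shift in (0, 0, 2, 2, 4)]
all_data = np.concatenate(subgroups)

print("within-subgroup sigma (R-bar / d2):", sigma_within_from_ranges(subgroups))
print("overall sample SD (all data)      :", all_data.std(ddof=1))
# The overall SD is inflated by the between-subgroup shifts, while the range-based
# within-subgroup estimate is not -- that separation is the "leverage" point above.
```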
Although I think this is tangential to my question, I must disagree with your statement here. The Six-Sigma process relies on control charts extensively. This is where I was first introduced to the notion of approximating std dev using the range-based method. Sadly, the rationale was not explained very well there either. At the time I just shrugged my shoulders and moved on, but now I'm more curious.
You miss my point. Control chart methodology was invented long before Six Sigma came along. It is unfortunate you did not have an instructor who could explain the reasons better (apparently I am having trouble as well). Control charts are an analytical tool (as originally intended, not enumerative). They are used to help understand causality. We really don't care how exact the numbers are compared to the true population statistics. We want to approximate efficiently so we can understand how and why the numbers vary. Knowing whether the number is exactly correct doesn't help for analytical problems. (See the Deming quote I already posted, and perhaps read the paper of his I referenced.)
The first thing that comes to mind is "Should the end goal of how I'm using this data matter?" I can still see no reason why we should accept alternative approximate methods for a term such as std dev that has (IMO) already been well defined and can be easily computed. That is, unless there is a clear advantage to using the approximation method over the actual formula. You did aim in that direction when you mentioned earlier that the approximation tends toward the "true" value more rapidly. I would still like to understand that proof, though.
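Since that claim keeps coming up, here is a hedged sketch (Python with numpy; the true sigma, number of subgroups, subgroup size, and d2 = 2.326 for subgroups of 5 are assumptions chosen purely for illustration) of how one could compare the two estimators empirically rather than wait for a formal proof, by simulating data of known sigma and looking at the mean and spread of each estimator over many replications.

```python
import numpy as np

rng = np.random.default_rng(0)
true_sigma, n_subgroups, subgroup_size, reps = 1.0, 10, 5, 5000
d2 = 2.326  # control chart constant for subgroups of size 5

sd_estimates, range_estimates = [], []
for _ in range(reps):
    data = rng.normal(0.0, true_sigma, size=(n_subgroups, subgroup_size))
    sd_estimates.append(data.ravel().std(ddof=1))             # classical sample SD
    range_estimates.append(np.ptp(data, axis=1).mean() / d2)  # R-bar / d2

for name, est in (("sample SD", sd_estimates), ("R-bar / d2", range_estimates)):
    est = np.asarray(est)
    print(f"{name:10s}: mean of estimates = {est.mean():.4f}, "
          f"spread of estimates = {est.std():.4f}")
```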
“Data have no meaning in themselves; they are meaningful only in relation to a conceptual model of the phenomenon studied”
Box, G.E.P., Hunter, W.G., and Hunter, J.S., "Statistics for Experimenters". So, YES, the purpose of the study does dictate how the data should be acquired.
What exactly do you mean by "the actual formula"? You must realize this is also an approximation. Ranges are quite efficient estimates when using the control chart method. Again, we don't need to know the exact number; we want an efficient means of identifying where the leverage is in our investigation.
This is interesting. If we assume a memoryless process, so that each measurement is entirely independent of any previous measurement, then the likelihood that a sequential set of measurements alternates between high, low, high, low... is the same as the likelihood that the set appears ordered from low to high. In the former case, the average 2-point moving range is maximized; in the latter, it is minimized. Both orderings are equally likely, and yet they can produce drastically different values for the standard deviation when computed by the range-based method. Are we OK with this? Again, I would argue this is a clear reason why the range-based method should not be used. Using a method whose result changes even though all the data points have the same values, just ordered differently, is still very puzzling to me.
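To put numbers behind that ordering argument, here is a minimal sketch (Python with numpy; the eight data values are arbitrary, and d2 = 1.128 is the standard constant for a 2-point moving range): the classical sample SD is identical for every ordering of the same values, while the moving-range estimate is not.

```python
import numpy as np

def mr_sigma_estimate(x):
    """Sigma estimated from the average 2-point moving range: mean MR / d2, with d2 = 1.128."""
    moving_ranges = np.abs(np.diff(x))
    return moving_ranges.mean() / 1.128

values      = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
low_to_high = np.sort(values)                                     # ordered low to high
alternating = np.array([1.0, 8.0, 2.0, 7.0, 3.0, 6.0, 4.0, 5.0])  # high/low swings

print("sample SD (any ordering)      :", values.std(ddof=1))
print("MR-based estimate, low to high:", mr_sigma_estimate(low_to_high))
print("MR-based estimate, alternating:", mr_sigma_estimate(alternating))
# Same eight values, same sample SD, but very different moving-range estimates.
```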
MR charts are not very useful for components-of-variation studies. They are used when there is NO rational subgroup (which, of course, is necessary for the control chart method). Over time all of the x's will change within the subgroup, so there really is no within-versus-between leverage question to be answered. MR charts are useful for assessing stability, though there is some likelihood of autocorrelation. They can be used on relatively small data sets (like when you are at the top of a nested sampling plan). Again, in analytical problems we don't care about accuracy as much as precision. Your situation is completely theoretical. Do you have any actual data sets where you have found one method to be superior to another? Have you actually taken data sets from problems you've worked on and compared how each estimate would affect the conclusions from your studies? That would be worthy of discussion and debate.
"What does this situation mean in plain English? Simply this: such criteria, if they exist, cannot be shown to exist by any theorizing alone, no matter how well equipped the theorist is in respect to probability or statistical theory. We see in this situation the long recognised dividing line between theory and practice… the fact that the criterion we happen to use has a fine ancestry of highbrow statistical theorems does not justify its use. Such justifications must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating"
Shewhart, Walter A. (1931) “Economic Control of Quality of Manufactured Product”, D. Van Nostrand Co., NY
You might be mistaking my colleague's comments for my own. They are the ones who need to be convinced that JMP is the right platform, not me. I'm just coming here to get the ammunition I need to do that. At the same time, I do feel their questions and concerns are legitimate, and I would like to help them understand.
You wrote the comments; I have no idea of the context. I am conversant in all three software programs, though my bias is to use JMP. If you want to convince users of the advantages of JMP over Minitab or Excel, there are several papers I suggest you read (you can google as well as I can).
For example https://statanalytica.com/blog/jmp-vs-minitab/ (simply the first one that showed up)
The argument over which program to use for estimates of enumerative statistics is not useful. I have been successful in convincing many Minitab users of the advantages of JMP in real-world problems (better graphical displays of data via Graph Builder, ease of pattern recognition via color or mark by column, selecting a data point in one chart highlights it in every chart, REML, etc.). Unfortunately, in reality, the people who decide which software a corporation uses are often people who have no idea how to use statistics or the software.
"All models are wrong, some are useful" G.E.P. Box