scottahindle
Level IV

How to create a range control chart based on the median range and not the average range

Is there any easy way to use JMP 13 to create a range chart based on the MEDIAN range and not the average range? (Hence the upper range limit would also be based on the median range.)

If not, I assume one option is to use the run chart and adapt it through scripting. I'd put the range values for all the subgroups into a list, find the median of this list, and then take it from there (i.e. adding reference lines through scripting, with the central line at the median range and the upper range limit at the median range multiplied by the appropriate bias correction factor).
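In case it is useful, here is a minimal sketch of that calculation in Python (in JMP itself the equivalent would be done in JSL). The column names, the file name, and the upper-limit factor value are placeholders, not anything from JMP: the factor is the tabled scaling value for your subgroup size for median-range-based limits, playing the role D4 plays when limits come from the average range.

```python
import pandas as pd

def median_range_limits(df, subgroup_col, value_col, upper_factor):
    """Centre line and upper range limit for a median-range-based R chart.

    upper_factor is the tabled scaling factor for the chosen subgroup size
    (the median-range counterpart of D4 for average-range limits).
    """
    # Range (max - min) within each subgroup
    ranges = df.groupby(subgroup_col)[value_col].agg(lambda s: s.max() - s.min())
    centre = ranges.median()        # central line = median of the subgroup ranges
    url = upper_factor * centre     # upper range limit
    return ranges, centre, url

# Hypothetical usage (placeholder column names and factor value):
# dt = pd.read_csv("process_data.csv")
# ranges, centre, url = median_range_limits(dt, "Subgroup", "Measurement", upper_factor=3.87)
# The centre line and url could then be added as reference lines on a run chart.
```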

Any comment / tip would be welcome. 

Scott.

10 REPLIES
statman
Super User

Re: How to create a range control chart based on the median range and not the average range

Scott,  I love your thought experiment, so I will continue it.  

First I will reiterate: range charts answer the question: is the variation within subgroup (which is a function of the X's changing within subgroup) consistent and stable (and therefore predictable)? If not, you should examine the X's changing within subgroup for their possible effect on Y. The control chart, in this case the range chart, is meant to help identify those "unusually large sources of variation". Sometimes the special cause (Don likes to call it an assignable cause) is obvious and you don't really need any statistical tests (control limits).

Xbar charts are comparison charts. They compare the within-subgroup sources of variation (represented by the control limits) to the between-subgroup sources (represented by the plotted averages) to determine where the leverage is (where should understanding of the process continue? which set of X's should you investigate further?).
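To make that comparison concrete, here is a small sketch (Python rather than JSL, with assumed column names) of how Xbar-chart limits are built only from the within-subgroup variation, via the average range and the A2 factor for the subgroup size, while the plotted subgroup averages carry the between-subgroup variation that is judged against those limits.

```python
import pandas as pd

def xbar_chart_pieces(df, subgroup_col, value_col, a2_factor):
    """Limits from within-subgroup variation; plotted points from subgroup averages.

    a2_factor is the tabled A2 value for the subgroup size (e.g. 0.577 for n = 5).
    """
    grouped = df.groupby(subgroup_col)[value_col]
    xbars = grouped.mean()                                    # plotted points: between-subgroup behaviour
    r_bar = grouped.agg(lambda s: s.max() - s.min()).mean()   # average within-subgroup range
    grand_mean = xbars.mean()
    lcl = grand_mean - a2_factor * r_bar                      # limits reflect only within-subgroup variation
    ucl = grand_mean + a2_factor * r_bar
    return xbars, grand_mean, lcl, ucl
```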

 

OK, let's say you have narrowed it down to something about the helium source. Of course there is more than one variable changing within subgroup, and one of them is something about the helium source (though you do not suggest what). Is there an insufficient amount? Is the concentration varying? Are there contaminants? You want the charts to identify these "events" so you can act appropriately on the process. The chart accomplished its purpose. My guess would be that this happens to the helium more than once... it really isn't special, just special relative to the time series or sampling plan you chose. The action you take would look like a common cause action (change the process) to prevent this in the future (really not special). If you change the process as a result of what you learned, should you use the data from the "old" process?

 

I don't know what "baseline data" is, but I'll suggest you are using the 10 subgroups as a basis for comparison and decision making. Why 10? My guess is you think these 10 will provide a representative estimate of the variability in the process (of what some may call the common cause variability): 10 because you believe that over 10 subgroups you will see the influence of the X's you hypothesize will affect the Y's (they will change enough over the 10 subgroups...). But contained in these 10 is an unusual "data point" that exhibited itself by creating a large within-subgroup range. What caused this data point? Was it measurement error? Or the other X's changing within subgroup? Certainly you would be confident that it is something to do with the X's changing within subgroup (assignable to the within-subgroup X's), as there are NO between-subgroup sources captured in the range statistic.

I'm a bit confused by your helium hypotheses, because if the helium is deteriorating over time it would likely show up on the Xbar chart, but I'll play along. Perhaps the variability from the helium effect was actually present in all of the data points and only exaggerated in the one data point causing the OOC condition? Or perhaps the helium was only part of the issue? Should you use the rest of the data to compare the within and between sources? Or should you develop a new sampling plan and get new data? Hmmm, the way I understand it is you start with hypotheses (and predictions about their effects), get data to provide insight into your hypotheses (and to determine how good your predictions were), understand what was learned, drop/add/modify your hypotheses, and run through the cycle again.

For your thought experiment: these X's captured within subgroup (changing at the within-subgroup frequency) may affect the Y, and there are these other X's that will change at the between-subgroup frequency (using just 2 layers of a sampling tree). I want to use sampling to help identify which sources are bigger (have greater leverage) because I want to know where to focus my efforts. But before I compare the sources I want to ensure the basis for comparison (the within-subgroup variation) is stable (comparison to an unstable basis is irrational). That is the purpose of the range chart. If the within-subgroup sources are not stable, investigate why and take appropriate action (which may be to change the process). If they are stable, then proceed with the comparison.

 

Of course it is your choice how you "manipulate" your data. And my guess is JMP will be happy to help you manipulate the data.  It seems to me your case is quite special.  I don't think JMP should change the "software", a common cause action, to react to a special cause event.  

Deming "Out of the Crisis", p.318:

Two kinds of mistakes:

1. Ascribe a variation or a mistake to a special cause, when in fact the cause belongs to the system (common cause).

2. Ascribe a variation or a mistake to the system (common cause), when in fact the cause was special.

 

"All models are wrong, some are useful" G.E.P. Box