I am currently looking for a way to automatically detect outliers in a distribution.
I was wondering if anyone has a script that executes Dixon's Q test on a JMP data table.
Thanks in advance,
I have a script that is almost finished. I am verifying its operation and calculations and adding more lookup tables. I will submit it to the File Exchange in the JMP User Community as soon as possible. I will also submit a script that I finished for Grubbs' test.
Thank you for the script for Dixon's test, which is very helpful (I wish the ability to run the Dixon test would become a standard feature of JMP). I was looking at this script, and unless I'm missing something, I think there might be two issues with it.
To demonstrate them I used the two-tailed test (first option on the list) with three numbers as in the enclosed example.
The first issue is that the critical values in the script seem to refer to the one-tailed test, or actually CL of 90% when choosing the 0.05 option. Please refer to Rorabacher for the list of critical values, especially for the case of 95% CL.
In the enclosed example, the value 11.66 is clearly an outlier, which should be detected by either the one-tailed or the two-tailed test. The critical value that the script shows is Q0 = 1, when in fact it should be 0.971; compared with the test result Q = 0.98, this should indicate an outlier.
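For reference, the three-value form of Dixon's Q is the gap between the suspect extreme and its nearest neighbor, divided by the full range of the sample. A minimal sketch in Python (the two negative values below are hypothetical stand-ins, since the enclosed example file is not reproduced here):

```python
def dixon_q(values):
    """Dixon's Q (r10) statistic for a small sample: the gap between
    the suspect extreme and its nearest neighbor, divided by the range."""
    xs = sorted(values)
    sample_range = xs[-1] - xs[0]
    gap_low = xs[1] - xs[0]      # if the low value is the suspect
    gap_high = xs[-1] - xs[-2]   # if the high value is the suspect
    return max(gap_low, gap_high) / sample_range

# Hypothetical data: two negative values plus the suspect 11.66.
q = dixon_q([-0.50, -0.30, 11.66])
print(round(q, 2))  # prints 0.98
```

With three values, Q is dominated by how far the suspect point sits from the other two relative to the overall spread, which is why a single extreme value pushes Q close to 1.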
Again, thank you so much. I would appreciate your feedback.
Thank you for using our script. I am glad that it is useful. One of the many reasons for the File Exchange is to provide useful tools that supplement JMP. If you want this outlier test built into JMP, then visit the JMP Wish List here in the JMP Community and request it. You might search for it first (I have not) and if you find such a request already submitted, then vote for it to boost its chances.
I was not aware of the paper you cited (Rorabacher 1991), so it was not the basis for my script. I did use one of the sources he cited, though: Barnett and Lewis (1994), albeit a newer edition than Rorabacher used. I agree with him that such tests are important and can be useful. I wish that more scientists were taught what they mean and how to use them properly. I generally see behavior that some have called "p-value fishing," in which tests are applied until one of them is "statistically significant." This behavior is especially rampant in the use of outlier tests.
The tests have been developed and revisited for many decades, and improvements in computing techniques and statistical theory continue to benefit the older methods. The tests are objectively based, but they rely on subjective choices: the significance level, the direction of the alternative hypothesis (one-sided or two-sided), and the source of the critical values, among others.
As to your specific issues, you said, "The first issue is that the critical values in the script seem to refer to the one-tailed test, or actually CL of 90% when choosing the 0.05 option. Please refer to Rorabacher for the list of critical values, especially for the case of 95% CL."
I carefully reviewed the script and confirmed that it is using the correct set of critical values for both 95% and 99% confidence. I further tested published examples that agreed with the result from the script.
Also, note that only the first test provided by the script ("Upper or Lower Outlier") is correctly applied as a two-sided test. All the other tests are one-sided, so doubling alpha for them, or using a two-sided confidence interval, is nonsense. Rorabacher seems to make this mistake on page 141. It is important to be clear about the null and alternative hypotheses in any test or in the construction of any confidence interval.
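To make the distinction concrete, here is a sketch of the smallest-sample (r10) form of the statistic (a hypothetical illustration, not the File Exchange script's own code). The one-sided statistics test a pre-specified end of the sample, while the two-sided statistic takes whichever end is more extreme, which is why it needs its own critical values rather than a recycled one-sided table:

```python
def dixon_statistics(values):
    """One-sided vs. two-sided r10 statistics for a sorted sample.
    (Hypothetical helper for illustration only.)"""
    xs = sorted(values)
    sample_range = xs[-1] - xs[0]
    q_lower = (xs[1] - xs[0]) / sample_range    # one-sided: suspect low value
    q_upper = (xs[-1] - xs[-2]) / sample_range  # one-sided: suspect high value
    q_two_sided = max(q_lower, q_upper)         # two-sided: suspect either end
    return q_lower, q_upper, q_two_sided
```

The two-sided statistic is stochastically larger than either one-sided statistic under the null hypothesis, so it must be compared against larger critical values at the same alpha.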
You said, "In the enclosed example, the value 11.66 is clearly an outlier."
I wish it were that simple and clear. In my years as an analytical chemist, I saw many cases in which I wanted to call an observation an outlier, but based on only 3 observations, such a call was unwarranted. How do I know that the two negative values are not the outliers? Random variation, such as that seen in a sample from a population, can produce surprising but still valid observations. Prior knowledge about the population could be used to augment the test, but these tests do not use information external to the sample.
The use of a critical value also creates the illusion that an observation either is or is not an outlier. In fact, the greater the distance measured by any of these tests, the stronger the evidence; it is a continuous, non-linear change in belief, not a sharp boundary.
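One way to see this continuity is to estimate a p-value for the Q statistic by simulation instead of comparing against a single tabulated cutoff. The sketch below assumes a normal null population and the three-value, two-sided form of the statistic; it is an illustration of the idea, not part of the script:

```python
import random

def dixon_q(values):
    """Two-sided Dixon Q (r10): the larger end gap over the range."""
    xs = sorted(values)
    sample_range = xs[-1] - xs[0]
    return max(xs[1] - xs[0], xs[-1] - xs[-2]) / sample_range

def simulated_p_value(q_observed, n, trials=20000, seed=1):
    """Estimate P(Q >= q_observed) under the null hypothesis that
    all n observations come from a single normal population."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if dixon_q(sample) >= q_observed:
            hits += 1
    return hits / trials

# The evidence grows smoothly as Q grows; there is no sharp boundary.
for q in (0.90, 0.94, 0.97, 0.99):
    print(q, simulated_p_value(q, n=3))
```

The estimated tail probability shrinks gradually as Q increases, which is the continuous change in evidence described above; the tabulated critical value is just the point where that curve crosses a chosen alpha.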
You said, "The critical value that the script shows is Q0=1 where in fact it should be 0.971 and when compared to the test result Q=0.98 should indicate an outlier."
Rorabacher points out how different critical values have appeared over the decades, so I would not say that, "in fact, it should be 0.971." Each author presented an honest effort to develop critical values that were based on the subjective choices such as those mentioned above. That value 0.971 represents another subjective choice about these tests. I stand by my choice of the source of the q0 values that I use in the script. But you are welcome to modify the script if you prefer Rorabacher's values instead. One of the beauties of a script is that you do not have to wait for JMP developers to add or enhance the feature.