nurasbani
Level I

Normality Test

A normality test is based on the residuals of the data, but I am confused about how to test for normality in JMP.

Should I calculate the residuals first and then test them for normality,

or just test the raw data directly?

Thanks

2 ACCEPTED SOLUTIONS
louv
Staff (Retired)

Re: Normality Test

Peter_Bartell
Level VIII

Re: Normality Test

To add to my colleague Lou Valente's contribution above: if I'm interpreting your question as "When in the analysis workflow should I test for normality of (something)?", the answer depends largely on the practical questions you are trying to answer AND the analysis methods you are employing to answer them. For example, if your ultimate goal is, say, to calculate process capability indices, some indices are fairly sensitive to the normality assumption, so testing the raw data for normality BEFORE any analysis is generally a worthwhile few minutes spent.
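In JMP, this raw-data check is typically done in the Distribution platform (fit a Normal under Continuous Fit, then look at Goodness of Fit). Outside JMP, the same idea can be sketched in Python with SciPy's Shapiro-Wilk test; the measurements below are simulated placeholders, not real process data:

```python
import numpy as np
from scipy import stats

# Simulated process measurements (placeholder data; in practice,
# substitute your own raw measurement column).
rng = np.random.default_rng(42)
measurements = rng.normal(loc=10.0, scale=0.5, size=100)

# Shapiro-Wilk test of the raw data: a small p-value is evidence
# of departure from normality.
w_stat, p_value = stats.shapiro(measurements)
print(f"W = {w_stat:.4f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Evidence against normality; capability indices may be unreliable.")
else:
    print("No strong evidence against normality.")
```

If the test rejects normality, JMP's capability tools also let you fit a non-normal distribution before computing indices, which is often preferable to forcing a normal assumption.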

 

On the other hand, let's say you are analyzing a designed experiment and (especially if you made this cardinal DOE execution mistake) you failed to run the experiment in random order. Then you may have had a nuisance/noise factor (a machine warm-up effect, for example) whose influence you'd like to be able to detect. If it did in fact influence your results, a time series plot of the residuals against experimental execution order (which means you've got to fit a model first) is warranted as your first line of defense in detecting the presence of the nuisance variable.
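In JMP, Fit Model can save residuals to the data table (Save Columns > Residuals), which you can then plot against run order. The fit-first-then-examine-residuals workflow can be sketched in Python; the one-factor experiment and its data below are hypothetical stand-ins:

```python
import numpy as np
from scipy import stats

# Hypothetical DOE results recorded in execution (run) order.
rng = np.random.default_rng(7)
run_order = np.arange(20)
factor = rng.choice([-1.0, 1.0], size=20)            # one two-level factor
response = 5.0 + 2.0 * factor + rng.normal(0, 0.3, 20)

# Fit the model FIRST, then work with the residuals.
slope, intercept = np.polyfit(factor, response, 1)
residuals = response - (intercept + slope * factor)

# Test the residuals (not the raw response) for normality.
w_stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk on residuals: W = {w_stat:.4f}, p = {p_value:.4f}")

# A simple drift check standing in for the time series plot:
# correlation between residuals and run order flags warm-up effects.
drift_corr = np.corrcoef(run_order, residuals)[0, 1]
print(f"Residual vs. run-order correlation: {drift_corr:.3f}")
```

A strong residual-versus-run-order pattern (trend, step, or cycle) is the signature of a nuisance variable, and is visible in the plot even when the normality test itself passes.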

