Hello JMP Community,
I have a set of data that is best fit by a lognormal distribution, and I can't find what is going on behind the scenes for the Nonconformance statistics versus the 95% confidence intervals of the distribution profiler. When I run the capability analysis, I get a nonconformance table. Observed % is obvious: my actual data had no values below the lower spec limit (LSL). What statistics are used to calculate the Expected Overall %? I assume it uses the fitted lognormal distribution and some judgment about how well it fits the data, but is it a 3-sigma approach or something else? How is it calculated, and what useful information does it provide as opposed to the confidence intervals of the distribution profiler?
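For concreteness, here is my current reading of the calculation, sketched in Python with SciPy (the data and spec limit are made up; I'm assuming the Expected Overall % just plugs the maximum-likelihood lognormal parameters into the CDF at the spec limit, which may not be exactly what JMP does):

```python
# Sketch of how I *think* Expected Overall % is computed: fit a lognormal
# by maximum likelihood, then evaluate its CDF at the lower spec limit.
# The data and LSL below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=2.0, sigma=0.4, size=100)  # stand-in for my data
lsl = 3.0                                            # hypothetical lower spec limit

# ML estimates of the lognormal parameters (mu, sigma on the log scale)
mu, sigma = np.mean(np.log(data)), np.std(np.log(data), ddof=0)

# Expected % below LSL = fitted lognormal CDF evaluated at the LSL
expected_below = stats.norm.cdf((np.log(lsl) - mu) / sigma)
print(f"Expected Overall % below LSL: {100 * expected_below:.3f}%")
```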
I think the confidence intervals are interpreted in the following manner: since I only have an LSL, I would look at the upper 95% confidence limit and make a statement along the lines of "with 95% confidence, one can expect at most 0.91% of values to fall below the LSL."
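To sanity-check that interpretation, here is one way to put an interval around the tail fraction, again with made-up data. I'm using a parametric bootstrap here purely as an illustration; I don't know whether the distribution profiler actually uses this, a profile-likelihood interval, or a Wald-type interval:

```python
# Sketch of a 95% interval on the fraction below the LSL via parametric
# bootstrap: resample from the fitted lognormal, refit, and collect the
# tail probability each time. Not necessarily what JMP does internally.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.lognormal(mean=2.0, sigma=0.4, size=100)  # made-up data again
lsl = 3.0

mu, sigma = np.mean(np.log(data)), np.std(np.log(data), ddof=0)
n = len(data)

boot = []
for _ in range(5000):
    sample = rng.lognormal(mean=mu, sigma=sigma, size=n)  # resample from the fit
    m, s = np.mean(np.log(sample)), np.std(np.log(sample), ddof=0)
    boot.append(stats.norm.cdf((np.log(lsl) - m) / s))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% interval for fraction below LSL: [{100*lo:.2f}%, {100*hi:.2f}%]")
# With only an LSL, the upper endpoint is the conservative bound I'd quote.
```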
Thank you for any help on this!
Greg