In JMP, I've got Pass/Fail data with 0 failures out of N samples. I analyze the data and have JMP generate the confidence intervals; for example, 0 failures out of 20 samples gives a lower confidence bound of 83.9% at a 95% confidence level.
Question: when there are zero failures and/or the upper CI is 1, does that lower CI value really correspond to only half of my alpha?
I.e., in my example, is 83.9% actually the 97.5% confidence level? And is 88.1% (JMP's calculation of the 90% level) therefore the true one-sided 95% confidence level?
Or is JMP smart/dumb enough to know that when my distribution butts up against 0 or 1, a one-sided confidence interval is appropriate?
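For reference, here is a small sketch of the half-alpha question using the Clopper-Pearson (exact) interval, where the lower bound with x successes out of n is a Beta quantile. Note this is an assumption for illustration: JMP may use a different method by default (e.g. a score interval), which would explain values that don't match the exact formula. With zero failures, the exact two-sided lower bound is (alpha/2)^(1/n) and the one-sided lower bound is alpha^(1/n), so you can see directly how much the two differ:

```python
from scipy.stats import beta

def clopper_pearson_lower(successes, n, alpha):
    """Lower Clopper-Pearson (exact) bound on the success proportion,
    with probability `alpha` in the lower tail."""
    if successes == 0:
        return 0.0
    return beta.ppf(alpha, successes, n - successes + 1)

n = 20
x = 20  # 0 failures -> 20 passes

# Two-sided 95% CI: the lower bound spends alpha/2 = 0.025 in the tail.
two_sided = clopper_pearson_lower(x, n, 0.025)   # = 0.025**(1/20), ~0.832

# One-sided 95% lower bound: the full alpha = 0.05 goes in the one tail.
one_sided = clopper_pearson_lower(x, n, 0.05)    # = 0.05**(1/20), ~0.861

print(f"two-sided 95% lower bound: {two_sided:.4f}")
print(f"one-sided 95% lower bound: {one_sided:.4f}")
```

Since the upper bound is pinned at 1 when x = n, the exact two-sided interval really does put only alpha/2 below the lower bound, which is the intuition behind the question.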