Good day: I have a question about the Stability Analysis platform in JMP.

For those of you who are not familiar with Stability Analysis (in the *Analyze > Reliability and Survival > Degradation* menu) as used in the pharmaceutical industry: in its basic form it is ANCOVA with one categorical factor (batch) and one covariate (time). From a model selection perspective, and starting with the full model, the idea is to test the interaction term (batch*time): if its p-value is < 0.25, then the final model used to estimate shelf life is “Model 1” (which includes the terms batch, time, and batch*time), as described below. Alpha is set at 0.25 (a regulatory requirement) for batch-related terms to increase power to detect differences between batches.
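To make the selection step concrete, here is a minimal Python sketch of the ICH Q1E-style poolability test described above (a partial F test of the interaction term at alpha = 0.25). The data are made up, and this is my own illustration, not JMP's internal code:

```python
import numpy as np
from scipy import stats

# Made-up example data (hypothetical assay values, % label claim):
# a sketch of the ICH Q1E-style poolability test, NOT JMP's internal code.
rng = np.random.default_rng(0)
months = np.array([0.0, 3, 6, 9, 12, 18])
n_batches = 3
time = np.tile(months, n_batches)
batch = np.repeat(np.arange(n_batches), len(months))
true_slopes = np.array([-0.30, -0.20, -0.10])    # per-batch degradation rates
true_intercepts = np.array([100.5, 100.0, 101.0])
y = true_intercepts[batch] + true_slopes[batch] * time \
    + rng.normal(0.0, 0.2, time.size)

def design(with_interaction):
    """Dummy-coded ANCOVA design: intercept, batch dummies, time,
    and optionally the batch*time interaction columns."""
    cols = [np.ones_like(time)]
    cols += [(batch == b).astype(float) for b in range(1, n_batches)]
    cols.append(time)
    if with_interaction:
        cols += [(batch == b) * time for b in range(1, n_batches)]
    return np.column_stack(cols)

def sse(X):
    """Residual sum of squares and parameter count for the OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid), X.shape[1]

sse_full, p_full = sse(design(True))    # Model 1: different slopes & intercepts
sse_red, p_red = sse(design(False))     # common slope, different intercepts
df_num = p_full - p_red                 # extra interaction parameters
df_den = time.size - p_full
F = ((sse_red - sse_full) / df_num) / (sse_full / df_den)
p_value = float(stats.f.sf(F, df_num, df_den))

print(f"interaction F = {F:.2f}, p = {p_value:.4f}")
print("keep Model 1 (separate slopes)" if p_value < 0.25 else "pool the slopes")
```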

In the JMP Help I see the following:

“When Model 1 (different slopes and different intercepts) is used for estimating the expiration date, the MSE (mean squared error) is not pooled across batches. Prediction intervals are computed for each batch using individual mean squared errors, and the interval that crosses the specification limit first is used to estimate the expiration date.”

My question is this (I posed it to JMP as well): for Model 1 (as defined above), why not use the pooled MSE?
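To make the question concrete, here is a sketch of the two alternatives with made-up data. For simplicity I use a one-sided 95% lower confidence bound on each batch mean at a candidate shelf life, not JMP's exact interval; the point is only the contrast between each batch's individual MSE (what JMP uses for Model 1) and the MSE pooled across batches (what I am asking about):

```python
import numpy as np
from scipy import stats

# Hypothetical assay data (% label claim) for three batches; this is my own
# illustration of individual-MSE vs pooled-MSE bounds, not JMP's implementation.
months = np.array([0.0, 3, 6, 9, 12, 18])
batches = {
    "A": np.array([100.4, 99.9, 99.1, 98.6, 97.9, 96.9]),
    "B": np.array([100.1, 99.3, 98.4, 97.8, 96.8, 95.4]),
    "C": np.array([101.0, 100.6, 100.1, 99.4, 98.8, 97.7]),
}

def fit(t, y):
    """Per-batch simple linear regression: intercept, slope, SSE, residual df."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (intercept + slope * t)
    return intercept, slope, float(resid @ resid), len(t) - 2

fits = {name: fit(months, y) for name, y in batches.items()}
pooled_sse = sum(f[2] for f in fits.values())
pooled_df = sum(f[3] for f in fits.values())
pooled_mse = pooled_sse / pooled_df          # what the question proposes

t_eval = 24.0                                # candidate shelf life (months)
n, tbar = len(months), months.mean()
sxx = float(((months - tbar) ** 2).sum())
lev = 1.0 / n + (t_eval - tbar) ** 2 / sxx   # leverage of the mean at t_eval

for name, (intercept, slope, sse_b, df) in fits.items():
    mse_i = sse_b / df                       # what JMP's Model 1 uses
    mean_pred = intercept + slope * t_eval
    lcl_indiv = mean_pred - stats.t.ppf(0.95, df) * np.sqrt(mse_i * lev)
    lcl_pooled = mean_pred - stats.t.ppf(0.95, pooled_df) * np.sqrt(pooled_mse * lev)
    print(f"batch {name}: 95% lower bound at {t_eval:.0f} mo -> "
          f"individual MSE: {lcl_indiv:.2f}, pooled MSE: {lcl_pooled:.2f}")
```

Pooling buys more residual degrees of freedom (a smaller t quantile) and, under homogeneity of variance, a better variance estimate, which is why the non-pooled choice for Model 1 puzzles me.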

JMP's response was: “Our development team chose to match Chow’s codes. I also believe this method was implemented to maintain consistency with results from a SAS macro called STAB that was written by FDA researchers.”

In the JMP Help, they reference Chow, Shein-Chung (2007), *Statistical Design and Analysis of Stability Studies*.

I do not have a copy of the referenced text, and the FDA macros mentioned are decades old now.

Assuming homogeneity of variance, can anyone enlighten me as to why the pooled MSE is not appropriate in this case, when it is appropriate for the other two models considered (Common Intercept/Common Slope and Different Intercepts/Common Slope)?

I'm using JMP v13 (and it was this way in previous versions as well).

Kind Regards