As discussed many times with JMP representatives over the last 10 years, a good improvement would be for the **prediction profiler** to allow more model **predictive checks**, e.g. comparing the data to prediction intervals and **visualizing the prediction intervals** directly on the profiler (because, really, to get an idea of tomorrow's outcome for my process, the usefulness of the confidence interval is close to zero).
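To make the point concrete, here is a quick sketch of why the two intervals answer different questions, using a hypothetical 2^3 factorial design and made-up response values (plain NumPy/SciPy, not JMP's own computation):

```python
import itertools
import numpy as np
from scipy import stats

# Hypothetical 2^3 factorial design (8 runs) with made-up response values
X = np.column_stack([np.ones(8),
                     np.array(list(itertools.product([-1.0, 1.0], repeat=3)))])
y = np.array([9.1, 10.2, 8.4, 11.0, 9.8, 12.1, 8.9, 11.7])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
dof = len(y) - X.shape[1]                                   # d.f. = 4 here
s = np.sqrt(np.sum((y - X @ beta_hat) ** 2) / dof)          # RMSE

x0 = np.array([1.0, 1.0, 1.0, 1.0])        # a new factor setting
y_hat = x0 @ beta_hat
h = x0 @ XtX_inv @ x0                      # leverage of the new point
t = stats.t.ppf(0.975, dof)

# Confidence interval: where the MEAN response lies
ci = (y_hat - t * s * np.sqrt(h), y_hat + t * s * np.sqrt(h))
# Prediction interval: where TOMORROW'S single outcome lies
pi = (y_hat - t * s * np.sqrt(1 + h), y_hat + t * s * np.sqrt(1 + h))

print("95% CI for the mean:      ", ci)
print("95% PI for a new outcome: ", pi)
```

The prediction interval is necessarily wider (the extra `1` under the square root is the run-to-run noise), and it is the one a process owner actually needs on the profiler.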

Similarly, when computing **p(defect) in the simulator**, it is a little sad to use a Normal approximation of the predictive distribution (i.e. Normal(mean, RMSE)) instead of the **more correct Student form, which accounts for the RMSE but also for the design uncertainty**. The latter is completely ignored at the moment, which makes the result statistically wrong, especially with small d.f. Unfortunately, small d.f. is the norm when using DoE.
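A small sketch of how much this matters with DoE-sized d.f. (again a hypothetical 2^3 design with made-up data; the upper spec limit is placed two RMSEs above the prediction just for illustration):

```python
import itertools
import numpy as np
from scipy import stats

# Hypothetical 2^3 factorial design (8 runs, 4 parameters -> d.f. = 4)
X = np.column_stack([np.ones(8),
                     np.array(list(itertools.product([-1.0, 1.0], repeat=3)))])
y = np.array([9.1, 10.2, 8.4, 11.0, 9.8, 12.1, 8.9, 11.7])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
dof = len(y) - X.shape[1]
s = np.sqrt(np.sum((y - X @ beta_hat) ** 2) / dof)          # RMSE

x0 = np.array([1.0, 1.0, 1.0, 1.0])
y_hat = x0 @ beta_hat
h = x0 @ XtX_inv @ x0                       # design uncertainty at x0

usl = y_hat + 2.0 * s                       # an upper spec limit in the tail

# Normal approximation of the predictive distribution: Normal(mean, RMSE)
p_norm = stats.norm.sf(usl, loc=y_hat, scale=s)

# Student predictive: heavy tails AND scale inflated by the design term
p_t = stats.t.sf((usl - y_hat) / (s * np.sqrt(1.0 + h)), df=dof)

print("p(defect), Normal approx :", p_norm)
print("p(defect), Student form  :", p_t)
```

With 4 d.f. the Student form gives a defect rate several times larger than the Normal approximation, exactly in the regime where DoE practitioners operate.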

(I mean, a closed-form solution even exists for multivariate multiple linear regression (not much more computation), and then, instead of writing a correlation matrix by hand, we could use the one estimated from the data, right?)...
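For the multivariate case, the residual correlation is already sitting in the data. A rough sketch of the idea (two made-up correlated responses on a hypothetical 2^3 design; using `df = n - p` for the multivariate t draw is a simplification of the exact matrix-t predictive form, not a claim about JMP's internals):

```python
import itertools
import numpy as np
from scipy import stats

# Hypothetical 2^3 design, two correlated responses (made-up data)
X = np.column_stack([np.ones(8),
                     np.array(list(itertools.product([-1.0, 1.0], repeat=3)))])
Y = np.array([[9.1, 4.8], [10.2, 5.6], [8.4, 4.1], [11.0, 6.0],
              [9.8, 5.1], [12.1, 6.9], [8.9, 4.5], [11.7, 6.4]])

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                  # one coefficient column per response
E = Y - X @ B_hat
dof = X.shape[0] - X.shape[1]
Sigma_hat = E.T @ E / dof                  # residual covariance FROM THE DATA,
                                           # no hand-written correlation matrix

x0 = np.array([1.0, 1.0, 1.0, 1.0])
h = x0 @ XtX_inv @ x0

# Joint predictive draws: multivariate t centred at x0 @ B_hat, scale
# inflated by (1 + h) for the design uncertainty (df choice simplified)
draws = stats.multivariate_t.rvs(loc=x0 @ B_hat,
                                 shape=(1.0 + h) * Sigma_hat,
                                 df=dof, size=10000,
                                 random_state=np.random.default_rng(1))
print(draws.shape)  # (10000, 2)
```

The simulator could then count joint out-of-spec draws directly, with the between-response correlation coming from the fit instead of user input.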