
Screening a Discrete Numeric Design


Community Trekker


Feb 11, 2015

I have created a split-plot RSM DOE with discrete numeric terms. AMT delta (x1) is a hard-to-change (HTC) factor with 5 discrete numeric levels, and alpha (x2) is an easy-to-change (ETC) factor with 7 discrete numeric levels. To develop a surrogate-based optimization process, I am using historical data to build a DOE and response surface. However, in the future I hope to apply this process when the shape of the data is unknown.

My question is about screening my design to determine which terms in my model are significant. The description of the screening analysis function says it supports only two-level or continuous factors. Is there any automated way to include higher-order polynomial terms in the screening function? The historical data I am working with now is nonlinear, and I expect the data I generate in the future will be nonlinear as well. So I'd like to start with the maximum polynomial order possible given the number of numeric levels of each factor. I'd also like to include all interaction terms involving higher powers.
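To make the term-counting concrete, here is a hypothetical sketch (in Python rather than JSL, and not part of any JMP platform) of enumerating the maximal candidate model: a factor with k discrete numeric levels can support powers up to k - 1 without aliasing, and crossing the per-factor power ranges generates every higher-power interaction term.

```python
from itertools import product

def candidate_terms(levels):
    """Enumerate exponent tuples for the maximal polynomial model.
    A factor with k discrete numeric levels supports powers up to
    k - 1 without aliasing, so cross the per-factor power ranges."""
    ranges = [range(k) for k in levels]               # powers 0 .. k-1
    terms = [t for t in product(*ranges) if any(t)]   # drop the intercept
    return sorted(terms, key=lambda t: (sum(t), t))

# x1 has 5 levels (powers up to 4), x2 has 7 levels (powers up to 6)
terms = candidate_terms([5, 7])
# 5 * 7 - 1 = 34 candidate terms, from (1, 0) up to (4, 6)
```

Each tuple is the pair of exponents on (x1, x2), so (1, 0) is the x1 main effect and (4, 6) is the highest-power interaction this design can support.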

I performed my own screening analysis manually by including all the terms in a fit model and then using backward elimination of insignificant terms one at a time. Does JMP have an automated way to do this?
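For what it's worth, the manual procedure described above can be automated outside JMP. The following is a minimal sketch in Python (the names and the 0.05 threshold are my own assumptions, not anything from JMP) of p-value-based backward elimination on an ordinary least-squares fit.

```python
import numpy as np
from scipy import stats

def backward_eliminate(X, y, names, alpha=0.05):
    """Drop the least significant term (largest p-value above alpha)
    from an OLS fit, one at a time, until every remaining term is
    significant.  Column 0 of X is the intercept and is never dropped."""
    X, names = X.copy(), list(names)
    while X.shape[1] > 1:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = len(y) - X.shape[1]
        resid = y - X @ beta
        se = np.sqrt(resid @ resid / dof * np.diag(np.linalg.inv(X.T @ X)))
        p = 2 * stats.t.sf(np.abs(beta / se), dof)
        p[0] = 0.0                         # protect the intercept
        worst = int(np.argmax(p))
        if p[worst] <= alpha:
            break                          # everything left is significant
        X = np.delete(X, worst, axis=1)
        del names[worst]
    return names
```

Running this on a model matrix whose columns are the candidate polynomial terms leaves only the significant ones, mimicking the one-at-a-time manual procedure described above.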

Or should I be going about fitting my model in a completely different way?






Jun 23, 2011


This is quite a loaded question, and one that is difficult to answer succinctly. I will say that, for me, the KISS (keep it simple) principle applies. A split-plot design with an RSM model (main effects, two-way interactions, and polynomial terms) seems just fine for the file that you shared. Attached is the file with the RSM model scripted.

Hope this helps.



Community Trekker


Feb 11, 2015


Thanks for your reply, once again. My main concern is capturing the true physics at work in my tests, but as the quote from George Box explains, there is no way to do this without taking an infinite number of points in my design space, which is pretty much the opposite of what DOE is trying to achieve.

I understand the potential pitfalls of over-fitting the data with higher-order polynomials. I wanted to attempt higher-order models to try to capture the whole design space accurately with a single model. However, perhaps the way to go about this is to find a first- or second-order model that fits the data, determine whether there are regions where the fit errors exceed a certain tolerance, or regions that aren't accurately captured according to historical engineering knowledge (such as the stall of an airfoil), and then re-run the experiment with more data points around the region of concern.
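That refinement loop can be sketched as follows (a hypothetical illustration, not the actual workflow or tooling): fit a low-order surrogate, then flag the design points whose absolute residual exceeds a tolerance as candidates for follow-up runs nearby.

```python
import numpy as np

def flag_regions(x, y, predict, tol):
    """Return the x locations where a fitted surrogate's absolute
    residual exceeds tol -- candidates for augmenting with new runs."""
    return x[np.abs(y - predict(x)) > tol]

# hypothetical example: a quadratic surrogate misses a sharp local feature
x = np.linspace(0.0, 1.0, 11)
y = x**2
y[8] += 0.5                        # localized discrepancy (e.g. near stall)
coef = np.polyfit(x, y, 2)
bad = flag_regions(x, y, lambda v: np.polyval(coef, v), tol=0.2)
# bad contains only x = 0.8, the region to target with additional points
```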

Another option could be to split the overall design space into separate experimental designs and fit a different model to each. For an airfoil, that would mean a linear model at lower angles of attack and something nonlinear near the stall region.
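As a hypothetical illustration of that piecewise idea (the split point and polynomial degrees here are made-up parameters, not anything from the shared file), one could fit a low-order model below a chosen threshold such as a stall angle and a higher-order model above it:

```python
import numpy as np

def piecewise_fit(x, y, split, deg_lo=1, deg_hi=3):
    """Fit separate polynomials below and above a split point
    (e.g. linear pre-stall, higher order near stall)."""
    lo = x < split
    return np.polyfit(x[lo], y[lo], deg_lo), np.polyfit(x[~lo], y[~lo], deg_hi)

def piecewise_predict(models, split, x):
    """Evaluate whichever sub-model owns each x."""
    c_lo, c_hi = models
    return np.where(x < split, np.polyval(c_lo, x), np.polyval(c_hi, x))
```

The trade-off is a possible discontinuity at the split, so in practice one would either constrain the two fits to agree there or keep a buffer of shared points around the boundary.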

Are these better routes than continuing to increase the order of a single model?