ZenCar
Level I

Definitive Screening Design (DSD) Model confirmation

Hi all,

 

I designed a DSD with the following info:

- Design: 7 continuous factors and 1 categorical factor, 22 runs

- Design Evaluation on power and correlation: very good

- Fit model via Fit Definitive Screening: excellent

  • RMSE = 0.2803, RSq = 0.997, p-value < 0.001
  • Lack of Fit: no (F Ratio 191, Prob > F = 0.0561)
  • Residual and Studentized Residuals are all good

- Model structure: 

  • Contains 5 main-effect, 4 quadratic, and 1 interaction terms

- Model confirmation: tried a few runs (points) around the maximum-desirability settings

  • some agree with prediction very well
  • some show large differences

Due to resource constraints, we can run 5 to 10 confirmation runs.

 

Please advise:

- Potential causes of the differences (e.g., underfitting or overfitting)?

- A method or procedure to improve/optimize the model

 

Thanks.

 

12 REPLIES
statman
Super User

Re: Definitive Screening Design (DSD) Model confirmation

This will be very difficult for us to help with.  Realize that the statistics you report say nothing about whether the model will work in the future.  Extrapolation is an engineering or scientific question, and it depends greatly on how representative your experiment was of future conditions.  The definitive screening design is a strategy for handling design factors, not the noise (factors you are unwilling to control or manage).  You say nothing about how you handled the noise.  If the noise factors were held constant, then your inference space is too small and your model will likely apply only to the data in hand. For future experimentation, my advice is to spend as much time considering what you will do with the factors not included in the design structure (noise) as with the ones that are.

 

 

"All models are wrong, some are useful" G.E.P. Box
ZenCar
Level I

Re: Definitive Screening Design (DSD) Model confirmation

Hi Statman,

 

The DOE was based on simulation software results. We found that one factor's levels (-1, 0, +1) did capture the full range of the underlying variable, but physically (0, 0.5, 1) or (-1, -0.5, 0) should be the effective range for the target response. So we can view this factor as a noise factor and use the same factor level in the confirmation runs.

 

Based on the additional info above, any advice please?

 

Thanks.

 

 

statman
Super User

Re: Definitive Screening Design (DSD) Model confirmation

Not sure I can help you.  Are you saying the results from running the experiments in the simulation software are not repeatable in the simulation software?  Or are you saying the results of the experiment in the simulation software are not repeating in "real life"?

 

I think there may be confusion about what noise is.  It goes by a number of different names: nuisance, background, et al.  Examples include ambient environmental conditions, wear or degradation of materials and equipment, operator technique, in some cases measurement error, lot-to-lot variation of materials, etc.  These variables are not typically manipulated in an experiment and are either held constant (a bad idea) or allowed to vary during the execution of the experiment.  They need to be representative of future "conditions" for your experimental results to be applicable in the future (or for your model to predict the response variables in the future).

 

If you are using simulation software, the algorithm is already contained in the software.  Do you not know it?  I have no idea how your simulation software "simulates" noise.

"All models are wrong, some are useful" G.E.P. Box
ZenCar
Level I

Re: Definitive Screening Design (DSD) Model confirmation

Hi Statman,

 

Thanks for your patience.

 

I am familiar with the noise concept; I used the Taguchi Method for many years. So in my case, we don't need to worry about noise.

 

In short, the DOE was based on the results from running simulation software. Everything looks good except that the model works poorly on some of the confirmation data.

 

To figure out what may be causing the not-so-good confirmations, here are a few things I am trying now:

1) Instead of using the model generated by Run Model directly after Fit Definitive Screening, I used Stepwise Regression to pick terms using the AICc and BIC criteria, then fit the selected terms with Standard Least Squares. I got some improvement.

2) I can also fit the above selected terms with additional runs, assuming the model is underfit. Question: how do I decide if a model is underfit or overfit? What is the solution if it is overfit?

3) Should I even try a nonlinear regression fit?
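For item 1, the AICc comparison that Stepwise uses can be sketched in plain Python. The RSS values and parameter counts below are made up for illustration (and JMP's exact bookkeeping of parameters may differ), but the penalty structure is the point:

```python
import math

def aicc(n, k, rss):
    """Corrected AIC for a least-squares fit with n runs, k estimated
    parameters, and residual sum of squares rss (smaller is better)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical 22-run candidates: a 9-parameter model vs. a 5-parameter one.
full = aicc(n=22, k=9, rss=1.0)
trimmed = aicc(n=22, k=5, rss=1.8)
# The smaller model can win on AICc despite its larger RSS,
# because the penalty term grows quickly as k approaches n.
```

With these made-up numbers the trimmed model scores lower (better) on AICc, which is exactly the kind of trade-off the criterion is designed to arbitrate in a small design like a 22-run DSD.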

 

Thanks.

 

 

 

 

 

statman
Super User

Re: Definitive Screening Design (DSD) Model confirmation

I'm happy to help if I can, but you did not answer my first question.  If you are running simulations, why fractionate, unless the computing time is too long?  The experiment was run by simulation software; were the confirmation runs also run by the simulation software?

 

I'm not sure what the Taguchi Method has to do with this discussion.  He certainly wasn't the first to identify the importance of noise in experimental situations (see Fisher).  If you are pointing to the inner and outer array, this method was first discussed in the 1950s with Cox and cross-product arrays.  I don't understand what you mean by "we don't need to worry about noise".  Has every variable that could possibly vary been studied?

 

There are many methods/approaches to model fitting.  There is the additive approach, which is what stepwise does, and there is the subtractive approach, which starts saturated and removes terms.  Different approaches suit different situations. There are a number of statistics that can provide help, but no one statistic gives you a definitive answer.  My first advice is to use engineering and science to determine whether the factors and levels suggested by the model make sense.  Useful statistics include: RMSE, the RSquare vs. RSquare Adjusted delta (some use predicted RSquare), coefficients, p-values, residuals (many plots), VIFs, etc.
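To make the RSquare vs. RSquare Adjusted delta concrete, here is a quick sketch (the run count and R-square values are invented for illustration):

```python
def r2_adjusted(r2, n, p):
    """Adjusted R-square for n runs and p model terms (excluding the
    intercept). A large gap between r2 and the adjusted value hints
    that some terms are not earning their keep."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Invented example: 22 runs, raw R2 = 0.95, two candidate term counts.
lean = r2_adjusted(0.95, n=22, p=4)      # small gap from 0.95
bloated = r2_adjusted(0.95, n=22, p=12)  # noticeably larger gap
```

The same raw R2 looks much less impressive once the adjustment penalizes the 12-term model, which is one simple way to spot a model carrying excess terms.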

 

 

 

"All models are wrong, some are useful" G.E.P. Box
ZenCar
Level I

Re: Definitive Screening Design (DSD) Model confirmation

Hi Statman,

 

For each run, we need to manually build a CAD model, which takes time, before running the simulation, which also takes time. At the beginning, it would have taken too many resources to run a full-factorial (FF) DOE. The first-round DSD helped cut it down to 8 terms (5 main effects, 4 quadratics, and 1 interaction). Maybe I can further trim it down and run an FF on 4 main effects.

 

My understanding of noise came from the Taguchi Method. I will adopt your definition from this point on.

 

We did spend time at the beginning identifying potentially significant variables so as to decide factor levels.

 

The DSD was designed to have good evaluation of power and aliasing. We then studied the results of the DOE and determined that it identified the correct effect significances.

 

As I mentioned, I ran Run Model directly after Fit Definitive Screening and got an "excellent" model with RSquare = 99%. But the confirmation runs were not very good.

 

I started to trim down the terms (say, using 4 main effects plus the quadratic and interaction terms of 1 main effect). The results improved a lot. This suggests to me that the model was overfit. I will look into the directions you pointed out for improvement. If that is not efficient, I may pursue an FF with 4 main effects (3^4 = 81 runs?).
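One way to make the overfitting suspicion quantitative is to compare the model's fit RMSE against the RMSE computed on the held-out confirmation runs; a confirmation RMSE several times the fit RMSE points toward a model tuned to the design runs. A minimal sketch (the observed and predicted values below are placeholders, not the actual runs):

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between observed and predicted responses."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Placeholder confirmation runs: observed vs. model-predicted responses.
observed = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.8]
confirmation_rmse = rmse(observed, predicted)
# Compare confirmation_rmse with the RMSE reported by the fit; a ratio
# much above ~2-3x is a practical red flag for overfitting.
```

With only 5 to 10 confirmation runs available, this single summary number is about as much as the budget supports, so it is worth computing carefully.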

 

Thank you very much for your inputs.

 

 

statman
Super User

Re: Definitive Screening Design (DSD) Model confirmation

By the way, I am not condemning Dr. Taguchi's ideas.  I had the privilege of teaching with him in Tokyo many years ago.  I found his thoughts on application of statistical methods quite interesting, particularly his ideas on creating appropriate response variables.  A very engineering approach!

 

As I have already commented, I'm still not sure I understand your situation.  I do know that having an RSquare of 99% means virtually nothing other than that the model you used explains ~99% of the variation IN THE DATA SET IN HAND.  It has little to do with whether your model will be useful for prediction.

"All models are wrong, some are useful" G.E.P. Box
ZenCar
Level I

Re: Definitive Screening Design (DSD) Model confirmation

Hi Statman,

 

Agreed. I used to be a fan of Taguchi and envy friends who had direct interaction with him. The method has made profound contributions to improving quality in many, many industries.

 

Thank you for your help. Please feel free to continue to input.

Re: Definitive Screening Design (DSD) Model confirmation

I understand that your system is a computer simulation. You are designing a computer experiment, presumably to fit a surrogate. Your simulation does not include any stochastic element, so the same input values lead to the same outputs. You are not screening inputs, which is what the DSD is intended for; you know the inputs because you can examine the simulation. I presume that the simulation is impractical (e.g., the computation takes too long for smooth graphics), so you want to use a good surrogate.

 

The DSD and several other design platforms are intended for physical experiments in which the response includes stochastic components. You should have tried the space-filling designs; they are meant to be used with computer experiments. They also support fitting the Gaussian process model, which is typically a much better surrogate than the linear regression model.
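To illustrate why a Gaussian process is attractive here: with a deterministic simulator it interpolates the training outputs exactly while bending smoothly between them, which a low-order polynomial cannot always do. A toy one-dimensional kriging predictor in plain Python (my own sketch using an RBF kernel, not JMP's Gaussian Process platform):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (adequate for the tiny systems used here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, length):
    """Squared-exponential (RBF) covariance between two inputs."""
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def gp_predict(xs, ys, xq, length=1.0, nugget=1e-9):
    """Kriging mean at query point xq given deterministic training data
    (xs, ys). The tiny nugget only stabilizes the linear solve."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], length) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return sum(alpha[i] * rbf(xq, xs[i], length) for i in range(n))

# Toy simulator output: a bump that a straight line would smooth over.
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
at_knot = gp_predict(xs, ys, 1.0)  # reproduces the training value
```

A real analysis would use JMP's Gaussian Process platform (or, say, scikit-learn's GaussianProcessRegressor) rather than this toy, but the exact-interpolation property of the predictor is the point.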

 

So I think that the bias in the linear regression model (i.e., lack of confirmation) is because of a non-linearity in the response that cannot be modeled well with a polynomial function.

 

If you combine the original DSD results with the new confirmation results and fit the model again, do the fit and the predicted response improve?