
Recordings and discussion: DOE Club Q2 2025

The next sessions are:

23 Sep | Q3: https://www.jmp.com/en/events/europe/users-groups/doe-club/2025/23-sep

25 Nov | Q4: https://www.jmp.com/en/events/europe/users-groups/doe-club/2025/25-nov

 

Question 1: How do I handle situations where my analysis shows no significant effects or interactions?

Chandra: Use graphical exploration. The Profiler, Contour Plots, and Interaction Plots in JMP can help spot trends that aren’t statistically significant yet may warrant further study.

Victor: It depends on the objective, the prior knowledge, and the assumed model/complexity of the system under study, plus budget constraints.

Definitive Screening Designs and OML designs (Mixed-Level Screening Designs) help investigate curvature (quadratic) effects early in the screening stage.

Simon: If I already have existing DSD data and I want to augment it to a full RSM, how do I deal with the lack of randomization, since the additional runs weren’t measured together with the first data set?

Victor: Use blocking when augmenting the design

Phil: Fit DSD will not work with missing runs, unfortunately.

Simon: Can you show the augmentation using blocking in the software?

Victor: I added this as a default-option suggestion to the Wish List: https://community.jmp.com/t5/JMP-Wish-List/DoE-Augmentation-Option-quot-Group-new-runs-into-separate...
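For readers who want to see what the blocked analysis of an augmented design looks like outside the point-and-click workflow, here is a minimal Python sketch (statsmodels, not JMP; the factor names and numbers are invented). A fixed block term separates the original runs from the added ones, so any shift between the two campaigns is absorbed:

```python
# Hypothetical augmented design: runs 1-6 are the original campaign (block 1),
# runs 7-10 were added later (block 2). All numbers are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "X1":    [-1, 1, -1, 1, 0, 0, -1, 1, 0, 0],
    "X2":    [-1, -1, 1, 1, 0, 0, 0, 0, -1, 1],
    "block": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2],
    "Y":     [8.2, 9.1, 7.9, 10.3, 8.8, 8.9, 8.0, 9.8, 8.5, 9.0],
})

# Full quadratic (RSM) model plus a fixed block effect to absorb any shift
# between the first and second batches of runs.
model = smf.ols("Y ~ C(block) + X1 + X2 + X1:X2 + I(X1**2) + I(X2**2)",
                data=df).fit()
print(model.summary())
```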


 

Question 2: How do I choose between different DOE types (e.g., full factorial, fractional factorial, response surface)?

 

Question 3: What to do when you lose a run of a DOE?

 

Victor: You could also run the model (if possible) without this run, and see whether the model is able to approximate the response of the run you left out. And if you must expand the ranges of your factors because they are too narrow, you can augment your design instead of starting from scratch, to benefit from the information already gathered.
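A small Python sketch of this check on made-up data: refit the model without the run in question, then see whether the observed response falls inside the refit model’s prediction interval (all names and numbers are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-factor design with centre points.
df = pd.DataFrame({
    "X1": [-1, 1, -1, 1, 0, 0],
    "X2": [-1, -1, 1, 1, 0, 0],
    "Y":  [7.8, 9.4, 8.1, 10.6, 9.0, 8.9],
})

i = 3  # index of the run in question
fit = smf.ols("Y ~ X1 + X2", data=df.drop(index=i)).fit()
pred = fit.get_prediction(df.loc[[i]]).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
print("observed:", df.loc[i, "Y"])  # inside the prediction interval?
```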

David: The challenge is that I am already at the limit of runs and cannot go with one run less. So I need to "augment" the design somehow. Any takes on whether you must use blocking?

Kevin: A common pitfall I see, and fall into myself, is trying to limit the number of factors at the start to make things manageable (i.e. to reduce the number of runs), and then eliminating factors on gut feel instead of screening them out with a DoE.

Of course there is a limit (you don’t need to screen every thought that comes to mind), but it is often helpful to get a few experts into a meeting and brainstorm a starting set of factors.

Victor: You have a design with a saturated model? You may still be able to use tools adapted for saturated designs, like the Fit Two Level Screening platform, to try to create a relevant model.

And/or, as you mentioned, you could augment your incomplete first design and use blocking for the new runs to prevent/detect any shift or variation between the first batch of experiments and the second one.

CK: Could you also repeat one of the runs that was run previously, to help account for any shift between batches?

Simon: I don’t know if we have already touched on this topic, but I feel limited in the final data analysis and model judgment. I learned in the JMP DoE course to analyse the ANOVA and the lack of fit. What other, maybe better, options do I have to judge my final model, and perhaps use that model’s statistics to plan a better experiment?

Victor: There are many model metrics to use for model comparison/validation, depending on your objective: R²/adjusted R², p-values, RMSE, information criteria, ...

More discussion : https://community.jmp.com/t5/Discussions/Model-reduction/m-p/823421/highlight/true#M100286
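For anyone wanting to reproduce such metrics outside JMP, here is a hedged Python sketch on hypothetical data. (The exact AICc constant differs slightly between software packages, so compare models within one tool rather than across tools.)

```python
import pandas as pd
import statsmodels.formula.api as smf

def aicc(fit):
    """Small-sample corrected AIC: AICc = AIC + 2k(k+1)/(n-k-1)."""
    n, k = fit.nobs, fit.df_model + 1  # +1 for the intercept
    return fit.aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical DOE results.
df = pd.DataFrame({
    "X1": [-1, 1, -1, 1, 0, 0, -1, 1],
    "X2": [-1, -1, 1, 1, 0, 0, 1, -1],
    "Y":  [7.9, 9.5, 8.2, 10.8, 9.1, 9.0, 8.3, 9.6],
})

# Compare nested candidate models on adjusted R², RMSE, and AICc.
for formula in ["Y ~ X1", "Y ~ X1 + X2", "Y ~ X1 + X2 + X1:X2"]:
    fit = smf.ols(formula, data=df).fit()
    print(f"{formula:24s} R2adj={fit.rsquared_adj:6.3f}  "
          f"RMSE={fit.mse_resid ** 0.5:5.3f}  AICc={aicc(fit):7.2f}")
```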

 

Kevin: It is a tough question indeed, David. One idea to try, without knowing more about your case: use a tool like Partial Least Squares (PLS) that can handle missing data. Put your planned experiment in the X block, make the Y block what you measured, and leave the missing value empty in the Y column (or matrix, if you have multiple responses).

PLS can then make a prediction for you; it might not be the best, but it will be better than dropping that run completely.

 Also, what would have been your next step after this analysis, if you had the data point that is missing?
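A rough Python sketch of this idea on invented data. One caveat: NIPALS-style PLS implementations (as in JMP) can tolerate missing cells, but scikit-learn’s PLSRegression cannot, so this sketch instead fits on the complete runs and uses the PLS model to predict the lost response, a simplification of the same idea:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Planned design settings (X block) and measured responses; the response of
# run index 5 was lost. All numbers are hypothetical.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [-1, 0], [1, 0]], dtype=float)
y = np.array([7.8, 9.4, 8.1, 10.6, 9.0, np.nan, 9.7])

obs = ~np.isnan(y)                     # runs with a measured response
pls = PLSRegression(n_components=2).fit(X[obs], y[obs])
y_hat = pls.predict(X[~obs])
print("imputed response for the lost run:", np.ravel(y_hat))
```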

Phil: I'd just repeat the missing run, as Kim says. It's not ideal, because "noise" factors could have changed, but I don't think there is a better option. Putting it in a separate block makes no sense; a block with one run is not useful.

CK: Centre points are good, but could you repeat a previously successful run?

Kevin: Could you try simulating your problem? In the simulation, see the effect of dropping a run and bringing it back in a later block, simulated with or without a block effect. Create a fake system where you know the truth, and simulate experimental data that replicates your scenario.

David: And then? How would you proceed?

Kevin: Then do the analysis on the simulated data and see how important this missed run is to the insights you would (or would not) have gotten.

Since you know the truth in the simulated system, you should see it back in your analysis of the simulated data.

David: By "the truth" in the system, do you mean which effects are present?

Kevin: Yeah, if the intention of your experiments is to discover which effects are at play in the range you are operating in.

David: I would have fitted the data to get the model with the lowest AICc to describe the system.

Kevin: So it sounds like you are in an early phase: screening out factors and learning about the system.

David: Exactly! That's why I feel that every run matters.

Kevin: In that case, the question to check in the simulation is: would the knowledge learned (e.g. that factor A makes response 1 decrease) have been wrong without that single missing run, or would you have gotten the same information? If the latter, the missing run is not an issue.
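A minimal Python sketch of that simulation, assuming a simple invented "truth" (all names and numbers hypothetical): simulate the experiment many times and compare how often the active factor is detected with the full design versus with the lost run dropped:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# 2^2 factorial plus two centre points; run index 2 is the one that was lost.
design = pd.DataFrame({"X1": [-1, 1, -1, 1, 0, 0],
                       "X2": [-1, -1, 1, 1, 0, 0]})
truth = lambda d: 10 + 2.0 * d.X1 - 0.5 * d.X2  # the known "true" system
lost = 2

hits_full = hits_drop = 0
for _ in range(500):
    df = design.assign(Y=truth(design) + rng.normal(0, 1, len(design)))
    full = smf.ols("Y ~ X1 + X2", data=df).fit()
    drop = smf.ols("Y ~ X1 + X2", data=df.drop(index=lost)).fit()
    hits_full += full.pvalues["X1"] < 0.05  # did we detect the real X1 effect?
    hits_drop += drop.pvalues["X1"] < 0.05
print(f"power to detect X1: full design {hits_full / 500:.2f}, "
      f"without run {lost} {hits_drop / 500:.2f}")
```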

 

Question 4: Making use of the Box-Cox transformation option within a DOE analysis

Victor: To know whether a transformation is needed, check the residuals from your model first.

Diana: On the Box-Cox transformation: I recommend rounding lambda to the nearest simple value (still inside the confidence interval of good transforms), because it will be much easier to explain to the audience later. It is much easier to say "it is a log transformation" than "it is to the power of -0.132342356".

Phil: Re the Box-Cox transformation: JMP suggests a lambda, which is the power to which the response is raised in the transformation. 0 is equivalent to a log transform, 2 squares the response, and -1 takes the inverse of the response. A log transform makes sense in some cases, e.g. where the response is a count of cell growth and therefore varies over orders of magnitude. I think it is hard to justify other transforms, so I would avoid them. The only exceptions are transforms that help when the response is on a percent scale or a 0-to-1 scale; logit and logit-percent transforms can be useful.
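A short Python illustration of Diana's rounding advice, using SciPy on invented data (scipy.stats.boxcox returns the estimated lambda and, when alpha is given, its confidence interval):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
y = rng.lognormal(mean=2.0, sigma=0.6, size=60)  # data that wants a log transform

# Estimate lambda and its 95% confidence interval.
_, lam, (lo, hi) = stats.boxcox(y, alpha=0.05)
print(f"estimated lambda = {lam:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")

# Pick the nicest interpretable lambda inside the CI
# (-1 = reciprocal, 0 = log, 0.5 = square root, 1 = none, 2 = square).
nice = [-1, -0.5, 0, 0.5, 1, 2]
print("interpretable choices inside the CI:", [l for l in nice if lo <= l <= hi])
```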

 

Question 5: What is the purpose of centre points, and how do I determine appropriate factor ranges?

Bill: DOE data sets are notoriously sparse, and methods like tree methods or SVEM are good options for analysing your DOE data.

Victor: The topic of centre points was also covered in a previous DoE Club session: https://community.jmp.com/t5/Design-of-Experiments-Club/Recordings-DOE-Club-Q1-2025/m-p/842727
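On the first part of the question, a small numeric sketch (hypothetical responses, assuming the corners form a 2^2 factorial) of what replicated centre points buy you: an estimate of pure error, and with it the classic curvature test that compares the centre-point mean with the factorial mean:

```python
import numpy as np
from scipy import stats

corners = np.array([7.9, 9.5, 8.2, 10.8])   # 2^2 factorial responses (invented)
centres = np.array([10.1, 9.8, 10.0, 9.9])  # 4 replicated centre points

diff = corners.mean() - centres.mean()      # curvature estimate
s2_pe = centres.var(ddof=1)                 # pure error from the replicates
se = np.sqrt(s2_pe * (1 / len(corners) + 1 / len(centres)))
t = diff / se
p = 2 * stats.t.sf(abs(t), df=len(centres) - 1)
print(f"curvature = {diff:.2f}, t = {t:.2f}, p = {p:.4f}")
```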

 

Question 6: Data that doesn’t follow any DOE principles

Victor: You can use the Augment Design platform. This helps build trust with people who have already done experiments with OFAT (one factor at a time).

Chandra: To capture interactions and curvature after an OFAT study, use JMP’s Augment Design feature, specifying the same factors and a fuller model (main effects, interactions, quadratics). This generates additional runs that, combined with your OFAT data, complete the design. Also add a blocking factor to separate the original and new runs for better modelling.
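As a toy illustration of the idea behind augmentation (this is not JMP's algorithm), the sketch below greedily picks new runs from a candidate grid to maximise the D-criterion det(X'X) of a full quadratic model, given a set of existing OFAT runs. All settings are hypothetical:

```python
import numpy as np
from itertools import product

def quad_model(runs):
    """Model matrix: intercept, X1, X2, X1*X2, X1^2, X2^2."""
    x1, x2 = runs[:, 0], runs[:, 1]
    return np.column_stack([np.ones(len(runs)), x1, x2, x1 * x2, x1**2, x2**2])

def d_crit(X):
    """D-criterion: determinant of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

# Existing OFAT runs (each factor varied alone) and a 3x3 candidate grid.
ofat = np.array([[-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
cands = np.array(list(product([-1, 0, 1], repeat=2)), dtype=float)

design = ofat.copy()
for _ in range(4):  # pick 4 augmentation runs, one at a time
    scores = [d_crit(np.vstack([quad_model(design), quad_model(c[None, :])]))
              for c in cands]
    design = np.vstack([design, cands[int(np.argmax(scores))]])

print("suggested extra runs:\n", design[len(ofat):])  # mostly corner points
```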

 

Comment from Caleb King:

I wanted to share something somewhat related to a couple of things that were mentioned (Kim mentioned rounding factor levels to a nicer level, and there's of course the discussion around using centre points to help estimate pure error). I have a colleague at NASA who has run into situations where testers have a hard time achieving the specified design settings but can record the actual settings with high precision. There are methods available that use this situation to estimate pure error without centre points. I've attached a presentation I gave three years ago on this topic (we had tried coming up with our own method; it turns out we were a few decades too late) that contains a summary of and references for research in this area. It might be useful to you all.

(see attachment)
