Hello,
I am creating a Custom Design with 4 factors (Factors A, B, C & D).
I have a question about a constraint that has to do with factors B and D.
- Factor B is the number of reactions performed in sequence within the same experiment: it can be 1, 2, 3, or more reactions in sequence. I set the range to 1-3 (not higher, because I am not interested in more than 3 reactions in sequence at the moment).
- Factor D is the time I wait between the reactions in the sequence, so I only define Factor D when Factor B is >1; its range is 0.1 to 10 seconds. Factor D would be 0 if Factor B is 1: when only 1 reaction is performed in the experiment, there is no following reaction to wait for.
How can I define this constraint in my Custom Design model?
Thank youuuu!!
PS: I use a Mac.
Hi @ADouyon,
Welcome to the Community!
Looking at your problem and options, here is what's recommended:
Based on your inputs, you can go into "Use Disallowed Combinations Filter" and select the range you want to exclude from your design (here, 1 <= B < 2 AND D > 0; see the "Constraints" screenshot for an example). JMP should then offer you a design that addresses this specific constraint (see the "Matrix" and "Experimental Space" screenshots).
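If it helps, here is a rough idea of how such a setup can look when scripted in JSL. This is only a hedged sketch, not your exact design: the response name "Y", the 12-run sample size, and the factor ranges are assumptions, and the exact option names are best checked against the script JMP saves with a generated design.

```
// Hypothetical JSL sketch of a Custom Design with the disallowed combination.
// "Y", the sample size of 12, and the factor ranges are assumptions for illustration.
DOE(
    Custom Design,
    {Add Response( Maximize, "Y", ., ., . ),
    Add Factor( Discrete Numeric, {1, 2, 3}, "B", 0 ),  // number of reactions in sequence
    Add Factor( Continuous, 0, 10, "D", 0 ),            // waiting time between reactions (s)
    Disallowed Combinations( B < 2 & D > 0 ),           // when B = 1, D must stay at 0
    Set Sample Size( 12 ),
    Make Design,
    Make Table}
);
```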
There may be better (and smarter) ways to do this, or more appropriate design/factor configurations to consider for this topic, but this should answer your question and cover your needs.
I hope it will help you,
Hi @ADouyon,
It seems that you can save and load constraints only if they are defined through the "Specify Linear Constraints" option.
What you can do in your case is:
There might be a smarter and more automatic way to do this (via JSL scripting, maybe?), but this approach works (and you don't need to manually recreate your disallowed combinations).
Hope it will help you! :)
Hello @ADouyon,
For the factors that can't take decimal values, you have two options:
There might be other ways to do it, so don't hesitate to create a new post and check other users' inputs and responses.
In the end, choose the option you're most comfortable with.
Hope it helps you,
Hi again @ADouyon,
I hope these answers will help you (sorry for the long post),
PS: My original design has 9 runs and no replicate runs because I didn't add the X1*X1 and X1*X2 terms to the model, but the general presentation and comparison of designs should still be valid (you may want to verify this; if needed, I can change the designs and provide a valid comparison and screenshots today).
Hi @ADouyon,
With the correct designs and models, the comparisons are a bit different:
There is a lot of information in the diagnostics outline. Some of it is more relevant at times. I am not saying that you should not look at everything, but we are generally somewhat selective in the information we use to compare the designs. Power, estimation efficiency, and correlation of estimates are more critical when I am screening factors or factor effects. On the other hand, prediction variance (profiled or integrated) is more acute when I am ready to optimize my factor settings based on model predictions.
Hi @ADouyon,
Concerning this new feature in JMP 17, it should be available soon (late October to early November), according to the JMP website: New in JMP 17 | Statistical Discovery Software from SAS
There is already a fascinating white paper on this new functionality (Design Explorer) available here, which promises very interesting use cases for selecting an optimal design: Choosing the Right Design - with an Assist from JMP's Design Explorer | JMP
1- Exactly! Sorry for not being clear, this is exactly what I meant: I prefer to create several designs in JMP and choose the most relevant one according to the experimental budget, goals, and constraints, rather than going into the lab with the first design created and figuring out later that I forgot some constraints or that not all of my experiments are feasible.
2- Sure! Once you have created several designs and the corresponding data table for each design, go to DOE -> Design Diagnostics -> Compare Designs. There you can select all the design tables (a maximum of 5 designs in total, so 4 selected plus the design from which you clicked "Compare Designs") and match the factors if they have different names across the tables (if they have the same names, as in your screenshot (x1 and x1, ...), JMP will figure out that they are the same, so you don't need to match each factor individually). You should then get the same view as I had. More info here: Evaluate Design Window (jmp.com)
3- The "Design Diagnostic" informations are values that need to be compared with other designs in order to see the strengths and weaknesses of each design. Each efficiency can go from 0 to 100. Different efficiencies are mentioned:
More info here: Design Diagnostics (jmp.com)
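As an illustration of what these efficiencies measure, D-efficiency (assuming the usual definition for a design whose factors are coded to -1/+1, with X the model matrix, p the number of model terms, and n the number of runs) is

```
D\text{-efficiency} = 100 \times \frac{1}{n} \left| X^{\top} X \right|^{1/p}
```

so a value of 100 corresponds to an ideal orthogonal design of the same size.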
4- Very good question, and I don't have a definitive answer. This is presumably due to the coordinate-exchange algorithm (used for custom designs) and the random starting points used to generate the design. Since the design is generated from random points in the design space, the optimal distribution of points in the experimental space may change from one design generation to another, so you can see slight changes in the values when you generate the design again. You can manually change these values to the closest setting (here, 2) without hurting the optimality of your design too much, or generate the design again, possibly increasing the number of random starts and/or the design search time (in the red triangle menu next to "Custom Design" you will find "Number of Starts" and "Design Search Time").
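If you want a scripted design to come out the same every time, one option (again a hedged sketch reusing the hypothetical factors from the earlier example; option names may differ slightly between JMP versions) is to fix the seed and raise the number of random starts directly in the script:

```
// Hypothetical sketch: a fixed seed makes coordinate exchange reproducible,
// and more random starts gives the algorithm a better chance of reaching the optimum.
DOE(
    Custom Design,
    {Add Response( Maximize, "Y", ., ., . ),
    Add Factor( Discrete Numeric, {1, 2, 3}, "B", 0 ),
    Add Factor( Continuous, 0, 10, "D", 0 ),
    Set Random Seed( 12345 ),
    Number of Starts( 1000 ),
    Set Sample Size( 12 ),
    Make Design}
);
```

These are the same settings that the "Number of Starts" and random-seed options in the red triangle menu control.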
Hi again @ADouyon,
As a first overview, yes, you're correct: power (= the probability of detecting an effect if it is really active) should be as high as possible, and prediction variance, shown in the fraction of design space plot, should be as low as possible. But as @Mark_Bailey answered, you'll prioritize in the comparison whatever you need most from your experimental plan (having everything high/perfect is often not possible, or comes at the cost of a very large number of experiments):
You'll find some info on Design Evaluation here (and on the following pages): Design (jmp.com)
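Just to restate the definition of power above with symbols (nothing design-specific here): if beta is the probability of missing an effect that is truly there (a type II error), then

```
\text{Power} = \Pr(\text{reject } H_0 \mid \text{effect truly present}) = 1 - \beta
```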
Yes. Since my primary design for the comparison was Custom-Design_0-replicates (the lowest number of experiments and no replicates), most of the other designs performed better, either because they have replicate runs (so a similar or better estimation of noise/variance for the same number of experiments) or because they have a higher number of experiments (the last 2 designs), which improves all efficiencies. The efficiencies of this simple design were therefore worse than those of the other designs (hence the red values).
Changing the primary design in the comparison would have changed the relative efficiencies shown (and thus the colors, which indicate the benefits (green) or drawbacks (red) of a design compared to the others).
No, the constraints are saved with the design, so when you augment your design and replicate it (by selecting the right factors and response in the corresponding menu and then clicking "OK"), your constraint(s) will be remembered and shown by JMP in the new window, along with the different options available for augmenting your design (in your case, especially "Replicate"). Depending on the number of times you want to perform each run (JMP asks for this after you click "Replicate"), JMP will then create "copies" of your initial design runs.
Hello @Victor_G,
Thank you very much for your quick response! This was very helpful!
I have a couple of follow-up questions:
1- When Factor B (the discrete numeric factor that is the number of biological reactions) is 2 or higher (3), Factor D cannot be zero. How can I add this second constraint to the model?
I tried it but I am not sure I did it right (screenshots attached).
2- The model automatically added the quadratic term B*B with estimability = if possible. Do you think I should keep this term? (I am trying to make this first design not too large)
Thank you!!
Hi @ADouyon,
I'm glad this first answer helped you.
Concerning your other questions:
If you can afford more runs, I would suggest adding some replicate runs to increase the power of your main effects (increasing the ability to detect a significant main effect if it is indeed present). You may have a severe experimental budget constraint, but comparing several designs with several sample sizes can help you choose the best option for your needs.
I hope this follow-up will help you as well,
Thank you so much, @Victor_G!! Much appreciated. Your answers are super helpful!!
I took a couple of courses on JMP DOE and read that we shouldn't invest more than 25% of our effort in the first experiment. For us, time is the most limited resource we have. I was wondering: is it common to generate more than one design with different sample sizes for comparison, like you mentioned? Shouldn't the designs give the same result in the end? Would you mind expanding a little more on that?
Thank you!!!
Best,
Hi @ADouyon!
I'm glad the answers were helpful. I don't know how common this practice is, but I personally prefer to spend more time comparing different designs and sample sizes than to go to the lab and do the experiments quickly, only to discover later that I forgot a constraint or that my design is not well suited to my needs.
--> To illustrate this, I really like this quote from Ronald Fisher: "To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of."
Different designs may lead to different interpretations of the results, depending on which terms are in the respective models. For example, screening designs with main effects only may help filter the relevant factors for the follow-up of a study, but they can clearly lack precision and predictive ability for the response(s) in the presence of two-factor interactions and/or quadratic effects (possible lack of fit).
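To make the difference concrete, here are the two kinds of models in generic terms (not your specific factors): a main-effects-only screening model versus one that also carries two-factor interactions and curvature:

```
\text{Screening (main effects only):} \quad y = \beta_0 + \sum_i \beta_i x_i + \varepsilon

\text{Interactions and curvature:} \quad y = \beta_0 + \sum_i \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j + \sum_i \beta_{ii} x_i^2 + \varepsilon
```

If the true response involves the interaction or quadratic terms but the design only supports the first model, those terms end up in the residual error, which is the possible lack of fit mentioned above.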
You have several options to generate different designs and compare them:
For the moment, to compare different DOEs you have to create them in JMP and then go to "DOE", "Design Diagnostics", and then "Compare Designs". But, spoiler alert: in JMP 17 it will be a lot easier to compare several designs (see screenshots), as a "Design Explorer" platform will be included in the DOE creation, so there will be no need to create the designs manually one by one: you will be able to create several designs in seconds and filter them based on your objectives and decision criteria.
And finally, take advantage of iterative designs, and don't expect or try to answer all your questions with one (big) design. It may be more powerful and efficient to start small, with a screening design for example, and then use augmentation to look for more detail, such as interactions and possible non-linear effects. Finally, if your goal is to optimize something, you can augment your design once again to fit a full Response Surface Model. At each step you gather more knowledge about your system and make sure that the next augmented design will bring even more knowledge, without wasting experiments/runs, time, and resources.
"Augment design" platform is really a powerful (and often underestimated) tool to gather more data and gain more understanding and build more accurate models without losing any previous information.
I hope this answer will help you,
One additional comment to mention a great resource about the sequential nature of experiments, by Prism Training & Consultancy: Article: The Sequential Nature of Classical Design of Experiments | Prism (prismtc.co.uk)
This article explains the process behind DOE very well, from creating and assessing the experimental scope for the factors, to screening and then optimization. It is part of a series of articles that may be of interest to you.
Happy reading!
Thanks once again, Victor! :)
Thank you incredibly, @Victor_G! Very much appreciated! Your comments are extremely useful and clear, thank you!
Thanks a lot for your kind comments and feedback @ADouyon!
I'm happy that my answers were helpful and clear enough! :)