<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW? in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/564660#M77697</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Great, I think it makes more sense to first be sure about main effects and interactions, and then augment the design to optimize prediction variance.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1- If you can run your experiments in batches of 12 runs, it is safer and recommended to use blocking. In your case, a random block like you used seems a good choice, as you're not particularly interested in the difference between the two blocks, but rather in the variability across them.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2- The problem with the "Augment Design" platform is that you will probably lose your random blocking (even if you include it in the factors before launching the platform, it won't be taken into account afterwards, and you'll only have access to the "Augment" option, not the "Replicate" one). There are two options for augmenting and replicating your design:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Copy your first 24 rows, paste them below, and change the random block levels (1 becomes 3, 2 becomes 4).&lt;/LI&gt;&lt;LI&gt;Directly generate a 48-run design by specifying in the design generation that you want 24 replicate runs, in groups of 12 experiments. You'll end up with design tv8 (attached), and from there you can directly run the 48 experiments of this design if you have enough experimental budget.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;The two approaches are similar, although the second one (tv8, compared to tv7 augmented by copy/paste) is more efficient (better distribution of experimental runs across the different blocks, see screenshots).&lt;/P&gt;</description>
    <pubDate>Fri, 04 Nov 2022 08:47:25 GMT</pubDate>
    <dc:creator>Victor_G</dc:creator>
    <dc:date>2022-11-04T08:47:25Z</dc:date>
    <item>
      <title>Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559266#M77273</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;I am working on a Custom DoE (JMP 16). I have 4 factors (1 continuous and 3 discrete numeric, with constraints). I have 1 response. I know these 4 factors are important and have interactions. My &lt;U&gt;goal is to optimize the response to find the optimal factor levels&lt;/U&gt; that result in the highest efficiency possible. I included all quadratic and interaction effects.&lt;BR /&gt;&lt;BR /&gt;I am using I-optimality, since this is not a screening design.&lt;BR /&gt;- The first design (v5) has no replicates. It has 18 runs total.&lt;BR /&gt;- The second design (v6) has duplicates. It has a total of 36 runs (I augmented it).&lt;BR /&gt;- The third design (v7) has triplicates. It has a total of 54 runs (I augmented it).&lt;BR /&gt;&lt;BR /&gt;For the experiment that I plan to do, it is easier for me to add duplicates or triplicates (of the same conditions) once in the lab. That's why I don't mind having more runs, as long as they are duplicates or triplicates of the same conditions, because I am able to run them all together as long as they are replicates of the same initial 18 runs.&amp;nbsp;&lt;BR /&gt;[If instead I simply use the "add replicate runs" feature, JMP adds far too many runs with too many different conditions for me to test (resource limitations). For example, when adding just 4 replicate runs, JMP generates a design with a default total of 27 runs, which is too many different conditions to test. That's why &lt;STRONG&gt;I decided to replicate my entire design instead, using the Augment feature&lt;/STRONG&gt;.]&lt;BR /&gt;&lt;BR /&gt;- I am comparing designs v5, v6 and v7 (attached). &lt;STRONG&gt;However, my "Fraction of Design Space" plot is blank. Why is it blank? Or is the data so small that I can't see it?&lt;/STRONG&gt;&lt;BR /&gt;- The "Power Analysis" and "Power Plot" look pretty good for v6 and v7. But I am interested in optimizing the response anyway (not in the main effects).&lt;BR /&gt;&lt;STRONG&gt;- However, the D/G/A/I-efficiencies are terribly POOR and all are the SAME. Are these all the same because design v6 is a duplicate of design v5, and design v7 is a triplicate of design v5? And why are these values so LOW (0.5 and 0.3)?&amp;nbsp;Are my designs bad?&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;I tried to look this up, but all I found in the DOE guide is:&lt;BR /&gt;&lt;SPAN&gt;&lt;EM&gt;"Relative efficiency values that exceed 1 indicate that the reference design is preferable for the given measure. Values less than 1 indicate that the design being compared to the reference design is preferable. The 16-run design has lower efficiency than the other two designs across all metrics, indicating that the larger designs are preferable."&lt;/EM&gt;&lt;BR /&gt;&lt;/SPAN&gt;But this didn't help me much.&lt;BR /&gt;&lt;STRONG&gt;&lt;BR /&gt;Thank you in advance!!&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 08 Jun 2023 21:12:33 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559266#M77273</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2023-06-08T21:12:33Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559309#M77284</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For your first question, the fraction of design space plot is blank for each individual design, which is strange. Looking at your disallowed combinations script, there may be a problem: the names of your factors in the data table are not the same as in your "Disallowed combinations" script (so JMP may be lost interpreting what "Pulses" is and which column it refers to, for example).&lt;/P&gt;&lt;P&gt;When I change the "Disallowed combinations" script to use the anonymized column names (X1, X2, X3, X4), I am able to see the fraction of design space plot for each individual design, and it appears in the design comparison (see screenshot). The corrected disallowed combinations script is below (please verify that I didn't make any mistake swapping X1/X2/X4 in the script):&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&lt;CODE class=" language-jsl"&gt;X2 == 1 &amp;amp; (X4 &amp;gt;= 0.0595238095238095 &amp;amp; X4 &amp;lt;= 10) | (
X2 == 2 | X2 == 3) &amp;amp; X4 == 0&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;For your second question about the efficiency comparison of designs, keep in mind that, based on your screenshots, your reference design in this platform is v5 (the smallest design, with 18 runs). So JMP calculates the &lt;STRONG&gt;relative&lt;/STRONG&gt; efficiencies of v5 compared to v6 and v7 (as ratios: v5 efficiency divided by v6 or v7 efficiency, for each efficiency criterion). Based on your screenshot, you can see that v6 will be 2 times more efficient than v5 on the D/G/A/I efficiency criteria when you add 18 runs, and v7 will be 3 times more efficient than v5 on those criteria when you add 36 runs (because the relative efficiencies of v5 compared to v7 are 0.333).&lt;BR /&gt;If you want to see the comparison based on your "medium" design with 36 runs, launch the design comparison platform from the data table of your 36-run design. You should see the relative comparison as shown in my attached screenshot (and you'll indeed see that v6 has relative efficiency values 2 times higher than v5's).&lt;/P&gt;</description>
      <pubDate>Sat, 22 Oct 2022 10:33:45 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559309#M77284</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-10-22T10:33:45Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559625#M77316</link>
      <description>&lt;P&gt;The scale for the prediction variance used by the FDS plot looks wrong. I suspect all the values are greater than the maximum on the scale. Did you try to change the scale?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Power is always lower for higher-order terms. Also, power is not that important when you are not using the analysis to determine which effects are important. The prediction variance is more important when using the model to optimize factor settings.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The efficiencies are normalized for the number of runs, so replicating a design will not change the efficiency. Efficiency is not helpful as an absolute metric: it compares the current design to a theoretical ideal design, which is often unattainable. Use it as a comparative measure between two or more designs. So the design with efficiency = 1/2 is better than the design with efficiency = 1/3.&lt;/P&gt;</description>
      <pubDate>Mon, 24 Oct 2022 14:56:27 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559625#M77316</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-10-24T14:56:27Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559648#M77324</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The screenshots provided are taken from the "Compare Designs" platform, so the efficiencies are "already" relative, since they are computed in comparison to the "worst" design here (v5, with 18 runs).&amp;nbsp;&lt;/P&gt;&lt;P&gt;So the efficiencies of v5 vs. v6 (36 runs) are 0.5 (efficiencies v5/v6 = 0.5), which should mean that the efficiencies of v6 are 2 times higher than those of v5?&lt;BR /&gt;And if that logic is right, then the efficiencies of v5 vs. v7 (54 runs) being 0.333 means that the efficiencies of v7 are 3 times higher than those of v5, so v7 is the best design (compared to the others), not v6, which has relative design efficiencies equal to 0.5 compared to v5.&lt;/P&gt;&lt;P&gt;Or did I misunderstand something?&lt;/P&gt;</description>
      <pubDate>Mon, 24 Oct 2022 16:52:05 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559648#M77324</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-10-24T16:52:05Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559750#M77332</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;and &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;! Thank you for your help! Much appreciated!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;, thank you for the insight! Similar to &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;,&amp;nbsp;I also wonder whether the 0.5 and ~0.333 (efficiency) ratios that we see when comparing to a duplicated or triplicated design, respectively, have any meaning then.&lt;BR /&gt;&lt;BR /&gt;1- Thank you &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;for catching that error in the constraints! I wonder, can I actually edit the constraints after I have already generated the DOE table, or do I have to start over?&lt;BR /&gt;On the same note, after I have generated a DOE table, how can I check the 'estimability' of the various terms (main effects, quadratic terms, etc.)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2- I also noticed another odd thing happening with my (now updated) design (i_tv5, attached). I specified the &lt;STRONG&gt;X4*X4 term as "if possible"&lt;/STRONG&gt; in the 'estimability' selection.&lt;BR /&gt;After I generate the DOE table, when I check '&amp;gt;Model' (on the left panel), I see all the terms I included before generating the design. However, if I go to '&amp;gt; Evaluate Design' (also on the left panel), I notice that the X4*X4 term has disappeared. It doesn't show there.&amp;nbsp;Did JMP remove that term from the design because there aren't enough runs to analyze the X4*X4 term (the total number of runs is 24)?&lt;BR /&gt;If I change the estimability of this term to 'necessary', then the design becomes worse (attached): a) the prediction variance is higher, b) the power analysis is all 0s, and c) the design efficiencies are 10x lower.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This issue with the X4*X4 term is giving me difficulties when comparing designs. I was trying to compare this design with a duplicated design (attached). The error message shows up again ☹ &lt;EM&gt;"Model for primary design cannot be fit by all designs. Removing inestimable model terms."&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;How can I solve this? Do I just ignore this error message, since JMP will ignore the term anyway? I believe this X4*X4 term may not be significant (but I am not 100% sure).&amp;nbsp;Due to resource limitations, I cannot do more than 24 runs.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;What would you recommend if I can only do 24 runs, and at the same time I think this term may not be very significant (but have never tested it before)?&lt;BR /&gt;&lt;/STRONG&gt;&lt;BR /&gt;3- On the other hand, &lt;STRONG&gt;what do the values under "Design and Anticipated Responses" &lt;/STRONG&gt;(see screenshot) &lt;STRONG&gt;mean?&lt;/STRONG&gt; Those values make no sense to me at all. The goal of this design is to maximize the response. I set the lower limit to 10. The values I see in this "Design and Anticipated Responses" section are &amp;lt;10, or 0, or even negative. &lt;STRONG&gt;Is this a concern?&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;4- One last quick question: &lt;STRONG&gt;When I augment a design, does the optimality criterion carry over to the augmented design?&lt;/STRONG&gt; I noticed that when I duplicate a design by augmenting it, the duplicated design seems to go back to the 'recommended' optimality criterion…. But what is the criterion recommended by JMP, D- or I-optimality?&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;&lt;U&gt;Thank you all for your help!!&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 24 Oct 2022 23:39:24 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559750#M77332</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-10-24T23:39:24Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559853#M77337</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;First, I just checked the JMP help section, and when using the "Compare Designs" platform, the explanation I gave about the efficiency ratios is correct. See&amp;nbsp;&lt;A href="https://www.jmp.com/support/help/en/16.2/#page/jmp/designs-of-different-run-sizes.shtml#" target="_blank" rel="noopener"&gt;Designs of Different Run Sizes (jmp.com)&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Concerning your new questions:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Since constraints are part of the design creation (they are taken into account to determine which part of the experimental space is restricted/forbidden, and the coordinate exchange algorithm tries to place the points in this space in the most optimal way), you can't change the constraints after having generated your design. Otherwise, you might end up with points in the newly forbidden/restricted part of your experimental space, so those runs wouldn't be feasible. I'm afraid you'll have to start over. If you're talking about the problem I identified with the variable names in the constraints, you can edit the script, but please check that the variable names in my edit are correct, to be sure that the design table you already have respects the constraints.&lt;BR /&gt;If you click on the "DoE Dialog" script, a new window opens with all the information you had just before clicking "Make Design", so you'll be able to check what the estimability of the terms in your design was (see screenshot).&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;&lt;LI&gt;As X4*X4 is not a "necessary" estimable term, you won't see it in the "Evaluate Design" platform, because the&amp;nbsp;&lt;SPAN&gt;number of runs is smaller than the total number of parameters that you would like to estimate (necessary + if possible).&lt;/SPAN&gt; From the JMP help about design diagnostics: "&lt;SPAN&gt;These diagnostics are not shown for designs that include factors with Changes set to Hard or Very Hard or effects with Estimability designated as If Possible." (source:&amp;nbsp;&lt;A href="https://www.jmp.com/support/help/en/16.2/index.shtml#page/jmp/design-diagnostics.shtml" target="_blank" rel="noopener"&gt;Design Diagnostics (jmp.com)&lt;/A&gt;). That's why you get "worse" design diagnostics when you set this term's estimability to "Necessary" (it is then taken into account in the evaluation of the design).&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;But in the analysis, this term is still kept, because if, for example, a main effect is not significant, this frees up some degrees of freedom to estimate another effect/term. You can read more details about this kind of design (with "If Possible" estimable terms) here:&amp;nbsp;&lt;A href="https://www.jmp.com/support/help/en/16.2/#page/jmp/optimality-criteria.shtml#ww600960" target="_blank" rel="noopener"&gt;Optimality Criteria (jmp.com)&lt;/A&gt;&amp;nbsp;There are several ways to analyze this type of design; I have mentioned some in another of your posts:&amp;nbsp;&lt;A href="https://community.jmp.com/t5/Discussions/Error-when-comparing-multiple-DOE-designs/m-p/559254/highlight/true#M77269" target="_blank" rel="noopener"&gt;Solved: Re: Error when comparing multiple DOE designs - JMP User Community&lt;/A&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;I would suggest keeping the estimability of the X4*X4 term set to "If Possible", and augmenting your design if the responses you measure show a strong indication that this term may be significant, and/or if you lack precision in your predicted response(s). DoE is not necessarily a "do it all at once" approach; it is often best to start with a smaller design and augment it in the most efficient and optimal way (a sequential/iterative approach).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3. You will find your answers here:&amp;nbsp;&lt;A href="https://www.jmp.com/support/help/en/16.2/index.shtml#page/jmp/power-analysis.shtml#ww257671" target="_blank" rel="noopener"&gt;Power Analysis (jmp.com)&lt;/A&gt;&amp;nbsp;You can freely change the anticipated response (or anticipated coefficient) values to see how the&amp;nbsp;anticipated coefficients behave (or, inversely, how the anticipated response values change). Don't forget to change the intercept coefficient so that the anticipated responses are above 10, as you would expect in your concrete use case.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;4. The optimality criterion may change depending on how you augment the design. For example, if you go from a screening design to an RSM design (by adding interactions and quadratic terms to the model), the optimality criterion may change from D-optimal to I-optimal. The best option is to check which one is set by JMP (look in the table, just below the table name and above the scripts, to see which criterion was chosen and how the design was generated), or to set it yourself if you know in advance (red triangle in the Custom Design platform, "Optimality Criterion", then choose the most relevant one).&lt;BR /&gt;&lt;BR /&gt;Hope this answer helps,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 25 Oct 2022 11:45:54 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/559853#M77337</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-10-25T11:45:54Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560513#M77384</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;, thank you very much for taking the time to explain everything and for pointing me to additional resources! It's very helpful!&lt;BR /&gt;&lt;BR /&gt;I was wondering, is the fact that&lt;U&gt;&lt;STRONG&gt; the G-efficiency (32) is lower than the D-efficiency (41) a concern if my goal is to optimize&lt;/STRONG&gt;&lt;/U&gt; and accurately predict the optimal response (not screen for factors)?&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Also, I read the &lt;A href="https://www.jmp.com/support/help/en/16.2/index.shtml#page/jmp/power-analysis.shtml#ww257671" target="_self"&gt;JMP help page on "Design and Anticipated Responses Outline"&lt;/A&gt;. It specifically says "&lt;EM&gt;The Design and Anticipated Responses outline shows the design preceded by an Anticipated Response column. Each entry in the first column is the Anticipated Response corresponding to the design settings. The Anticipated Response is calculated using the Anticipated Coefficients...&lt;/EM&gt;The Anticipated Response is&lt;EM&gt;&lt;SPAN&gt; the&lt;/SPAN&gt;&amp;nbsp;response value obtained using the Anticipated Coefficient values as coefficients in the model. When the outline first appears, the calculation of Anticipated Response values is based on the default values in the Anticipated Coefficient column. When you set new values in the Anticipated Response column, click &lt;SPAN class=""&gt;Apply Changes to Anticipated Responses&lt;/SPAN&gt;&lt;/EM&gt;&lt;SPAN&gt;&lt;EM&gt; to update the Anticipated Coefficient and Power columns"&lt;/EM&gt;&lt;/SPAN&gt;.&lt;BR /&gt;However, I am still a bit confused as the explanation is quite brief. These anticipated response values are not a prediction of the response, right? 
These values don't mean that these response values are what I should expect after doing the experiments in the lab, do they?&lt;BR /&gt;&lt;STRONG&gt;These values currently range from -0.9 to 9 in my duplicated i_tv5 design (attached), but this doesn't mean that I should expect to obtain a response within this -0.9 to 9 range after I do the experiment in the lab, does it?&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;Thank you&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;!!&lt;/P&gt;</description>
      <pubDate>Wed, 26 Oct 2022 15:46:25 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560513#M77384</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-10-26T15:46:25Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560594#M77389</link>
      <description>&lt;P&gt;Yes, that is exactly what it means. This information serves as a consistency check if you use the Power Analysis this way. You can enter anticipated coefficients and review the expected response for each run, or you can enter the anticipated responses and review the corresponding coefficients you would get from the regression analysis. They are just two different ways of telling JMP about the size of the effects you expect, so that it can perform the power analysis. They are different, but they must be consistent.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Oct 2022 17:51:43 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560594#M77389</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-10-26T17:51:43Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560633#M77403</link>
      <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;&amp;nbsp;for clarifying!&lt;BR /&gt;&lt;BR /&gt;That makes sense!&lt;BR /&gt;So, right now the anticipated coefficients are all 1 by default. Because all the coefficients have the same value (=1), the anticipated responses are very similar between the various treatments/runs (ranging from -0.9 to 9). Right? However, in real life these values &lt;U&gt;&lt;STRONG&gt;will probably not all be the same, and will probably not be equal to 1&lt;/STRONG&gt;&lt;/U&gt;. Did I understand this correctly?&lt;BR /&gt;&lt;BR /&gt;Thank you very much!!&lt;/P&gt;</description>
      <pubDate>Wed, 26 Oct 2022 19:59:51 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560633#M77403</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-10-26T19:59:51Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560926#M77405</link>
      <description>&lt;P&gt;As such, the default values of 1 for the RMSE and all coefficients are useful. This setup reflects the case where you expect the absolute value of every effect to be at least twice the RMSE. It is a relative comparison.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Oct 2022 20:33:39 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/560926#M77405</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-10-26T20:33:39Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/562144#M77517</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Concerning your first question about your G-efficiency being lower than your D-efficiency, it's quite hard to compare different efficiency metrics on a single design. Remember that efficiency metrics are aggregated measures/statistics of optimality criteria, so they can provide a good overview when comparing designs, but they are not sufficient to make a decision.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;If you are concerned about the precision of your predicted responses, I think the "Prediction Variance Profile" is much more informative, as you can explore and determine in which area of your experimental space the relative prediction variance is highest. In your latest design, you will have a quite high relative prediction variance when X2 is at the lowest level (1) and X4 at the highest (10), which is expected because, due to the disallowed combinations, no points will be in that area (see screenshot "Experimental-space-X4-X2"). This may also explain why the aggregated measure for G-optimality, the G-efficiency, is low (the restricted experimental area increases the relative prediction variance).&lt;BR /&gt;So you can gain a good understanding by visualizing your experimental points/space and looking at the "Prediction Variance Profile". See&amp;nbsp;&lt;A href="https://www.jmp.com/support/help/en/16.2/#page/jmp/prediction-variance-profile.shtml?os=win&amp;amp;source=application#ww168138" target="_blank"&gt;Prediction Variance Profile (jmp.com)&lt;/A&gt;&amp;nbsp;for more details and for calculating the actual variance of prediction.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Concerning your second question, these anticipated response values are not a prediction of the response: if they were, that would suppose you already knew the exact values of the coefficients in your regression model, and you probably wouldn't run an experimental design if you did.&lt;/P&gt;&lt;P&gt;It's more of a guide that you may use in two different ways:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;To know, based on the model you create and the model coefficients that you can freely choose/change, which values you could expect for the experimental responses (don't forget to change the intercept coefficient to get response values in the range you would expect, rather than centered around 0/1, depending on the other coefficients).&amp;nbsp;&lt;/LI&gt;&lt;LI&gt;To assess, with the help of the Power column, the lowest detectable difference (or the probability of detecting an effect) with your design, for a specific significance level and RMSE.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I hope this response is helpful,&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 29 Oct 2022 09:39:05 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/562144#M77517</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-10-29T09:39:05Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/562932#M77599</link>
      <description>&lt;P&gt;Hello &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;!&lt;BR /&gt;&lt;BR /&gt;Thank you so much for your reply, super helpful!!&lt;BR /&gt;&lt;BR /&gt;To follow up on that, how low does the prediction variance need to be? What would be the cut-off value?&amp;nbsp;&lt;BR /&gt;The JMP help page says "&lt;SPAN&gt;Low prediction variance is desired."&amp;nbsp;&lt;BR /&gt;I am deciding between two DOE designs, &lt;STRONG&gt;'tv5' (24 treatments, 48 runs because duplicated)&lt;/STRONG&gt; and &lt;STRONG&gt;'tv7' (12 treatments, 24 runs because&amp;nbsp;duplicated)&lt;/STRONG&gt;, both attached. In terms of prediction variance, design 'tv5' has the better (lower) one.&lt;BR /&gt;But 'tv5' has double the number of treatments of 'tv7' (24 vs 12), which means I can run the experiments much faster with design 'tv7'. Design 'tv7' only has 3 factors; I dropped 'X4' to reduce the number of experiments needed. I'd like this first experiment to be just a proof of concept, to show what we can accomplish with JMP, so I prefer not to have too many treatments/runs in my first design, but I am worried that the prediction variance is too high for tv7. Is it?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot,&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;!!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 01 Nov 2022 14:38:54 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/562932#M77599</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-11-01T14:38:54Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/562981#M77607</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;There is no "hard rule" or "cut-off value" for how low the (relative?) prediction variance should be, as the requirement differs depending on the topic, domain and business value. In some domains and specific applications (like pharmaceuticals), there are norms about the required precision of the models; in others, the value is left to the discretion of the scientists/experimenters. The aim is a compromise between adequate precision, the available experimental budget, and the constraints of the design/experimental space.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm not surprised by the comparison results you have between the two designs: increasing the number of runs decreases the prediction variance (you'll have more degrees of freedom, and consequently a better estimation of errors/noise and of effects).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you're in a "proof of concept" phase and not entirely sure which factors to include in your model and design, I find it quite dangerous to remove a factor unless you're entirely sure that it has no effect on the response(s).&lt;/P&gt;&lt;P&gt;To stay safe, I would prefer to start with a design that has a higher prediction variance (but more factors evaluated), in order to be sure not to miss an effect. Once this screening is done (on main effects and interactions), you can always augment your design to decrease the prediction variance (predictive model) and/or add new terms to the model (like quadratic effects for optimizing the model).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Your choice will depend on which stage of the design you're at (screening, optimisation, prediction?) and the knowledge you already have about your process/product/formulation/...&lt;/P&gt;</description>
      <pubDate>Tue, 01 Nov 2022 18:39:22 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/562981#M77607</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-11-01T18:39:22Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563044#M77611</link>
      <description>&lt;P&gt;Thanks so much&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;!! That makes sense!&lt;BR /&gt;&lt;BR /&gt;So, generally, is it good practice to first test the main factors together with the interaction terms, and augment later by adding quadratic terms?&amp;nbsp;(Just confirming, since your suggestion has this order.)&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Thanks A LOT&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;!!&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 01 Nov 2022 19:07:55 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563044#M77611</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-11-01T19:07:55Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563085#M77616</link>
      <description>&lt;P&gt;Hello &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;Yes, generally you don't do only one design; you keep augmenting it several times as your knowledge of the process/system/formulation increases.&lt;BR /&gt;&lt;BR /&gt;You can start by screening only main effects (depending on the number of factors and required number of runs, you may add interaction terms directly in this phase), then interactions to improve your knowledge of the system, and then quadratic effects to build a robust predictive model.&lt;BR /&gt;&lt;BR /&gt;Depending on the number and type of factors, there might be better alternatives: for example, for a high number of factors (&amp;gt;5, with only continuous or 2-level categorical factors), Definitive Screening Designs (DSD) might be a good choice for screening main effects in an unbiased way, while keeping the possibility of detecting some interaction and quadratic terms.&lt;BR /&gt;&lt;BR /&gt;Hope this answer will help you,&lt;/P&gt;</description>
      <pubDate>Tue, 01 Nov 2022 20:28:27 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563085#M77616</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-11-01T20:28:27Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563106#M77619</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;for the quick and clear response!!&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;- So, if one of my factors is time, would you add the quadratic effects/terms directly in this first 'proof of concept' design, or would you still wait to add them later (in the next iteration) when augmenting?&lt;BR /&gt;I remember that in my very first post you warned me that if I have a time factor that is a continuous variable, I should include quadratic effects (&lt;A href="https://docs.google.com/presentation/d/1BOYJfa54Kt2MnH5TrSpSOOr6-GLIwlJKUhZPFK8gxXM/edit?usp=sharing" target="_blank"&gt;https://docs.google.com/presentation/d/1BOYJfa54Kt2MnH5TrSpSOOr6-GLIwlJKUhZPFK8gxXM/edit?usp=sharing&lt;/A&gt;). However, my time factor is a discrete numeric factor with 3 levels (because I can't accept decimals); does this make a difference?&lt;BR /&gt;&lt;BR /&gt;- In JMP DOE, it seems that we really separate things into buckets depending on the purpose: 1) screening, 2) optimization, 3) prediction.&amp;nbsp;&lt;BR /&gt;But do people always have such a clear separation in their work/design?&lt;BR /&gt;My design has 4 factors, of which 3 I know for sure have a significant main effect. There is only 1 factor (X4) that I am not sure will be very significant in the design (I suspect it won't be as significant as the other 3, but I have no proof). I suspect that most of the interaction effects will be significant (probably all except those involving X4, which again I don't think will be as significant, but I have no proof). [And I suspect I will have quadratic effects as well...]&lt;BR /&gt;Thus, it seems that my design is a mixture of screening (because I have one factor I don't know anything about) and optimization (because I have 3 factors that I am 99.999...% sure have a real significant effect). Would that make sense?&amp;nbsp;Could you clarify this,&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;, since it affects the optimality criterion to start with (I am confused about whether I should choose D- or I-optimality)?&lt;BR /&gt;&lt;BR /&gt;Thanks a million&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;!!!&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 01 Nov 2022 21:16:37 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563106#M77619</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-11-01T21:16:37Z</dc:date>
    </item>
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563211#M77625</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/44170"&gt;@ADouyon&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- You're right: since you have few factors to screen, I recommended adding a quadratic effect for time, because chemical or biological processes are rarely linear over time, so it may be best to have this quadratic term directly in the model. And since you had "constraints" on this time factor (no decimals), you chose a discrete numeric type with 3 levels, which automatically adds the quadratic term to the model with estimability "if possible", so you should be fine with these settings.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- These buckets are not entirely separate from each other; they're more of a guide during the DoE creation process, to help you self-assess which stage you're at. Depending on your experimental budget, it can be tempting to do only one big DoE, but that may waste resources, as most of the effects probably won't be significant or have a big influence on the response. This is why I tend to describe the use of DoE during a study with these 3 steps (screening, optimization and prediction), as it helps you figure out what you already know about the system and what you would like to know to further understand and predict it. In previous DoE trainings I used the figure inserted here as a screenshot. But as you say, depending on the situation it may look a bit over-simplistic, and not all studies using DoE go through these 3 stages, as previous knowledge, domain experience or historical data can provide a fair understanding of the system.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In your case, the number of factors is quite low, but you're unsure about the significance of some effects (X4 and the interactions/quadratic effect involving X4). So I would choose the D-optimality criterion in order to estimate the parameters as precisely as possible, and have the ability to sort significant effects from non-significant ones.&amp;nbsp;&lt;BR /&gt;Your design does make sense, and the 24-run DoE ("i_tv5_Iopt_changedEst_remov2_24runs") before replication looks like a good compromise between run size and screening of effects. And thanks to the 5 replicate runs, you should be able to estimate the noise in your response and the lack-of-fit.&lt;/P&gt;&lt;P&gt;It may not yet be a predictive model (depending on the precision you want), but it's a good first step in assessing which terms are significant, and from there you can augment the design to improve the prediction precision.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I hope this answer will help you,&lt;/P&gt;</description>
      <pubDate>Wed, 02 Nov 2022 09:14:56 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563211#M77625</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2022-11-02T09:14:56Z</dc:date>
    </item>
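The D-criterion recommended above can be illustrated with a toy comparison. This is a sketch under stated assumptions: the two 4-run design matrices below are hypothetical coded designs, not the attached tv files. A D-optimal design maximises the determinant of the information matrix X'X, which shrinks the joint confidence region of the coefficient estimates:

```python
import numpy as np

# Two hypothetical 4-run designs for a model with an intercept and
# two main effects (coded -1/+1 columns).
# Design A is orthogonal; design B repeats one corner instead.
A = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]])
B = np.array([[1, -1, -1],
              [1, -1, -1],
              [1,  1, -1],
              [1,  1,  1]])

def d_efficiency(X):
    """Per-run scaled determinant of the information matrix: det(X'X)^(1/p) / n."""
    n, p = X.shape
    return np.linalg.det(X.T @ X) ** (1.0 / p) / n

eff_A = d_efficiency(A)   # orthogonal two-level design: efficiency 1.0
eff_B = d_efficiency(B)   # lower: the repeated corner estimates less precisely
```

The same per-run scaling is what makes efficiency values comparable across designs with different run counts.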
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563333#M77633</link>
      <description>&lt;P&gt;The prediction variance is helpful information for comparing two or more designs when prediction is the primary goal of the experiment: the lower the variance, the better the design.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Remember that there is no information about the variance of the response before the data is collected. This plot assumes that this variance is 1, which is unlikely. The plot also assumes the linear model and its assumptions, including constant variance throughout the response range. If you have an estimate of this variance, multiply the scale in this plot by your estimate to obtain the actual prediction variance. The square root of this variance is the standard error of prediction.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Nov 2022 12:27:19 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/563333#M77633</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-11-02T12:27:19Z</dc:date>
    </item>
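The scaling described in the post above can be sketched numerically. A minimal illustration, assuming a hypothetical 2-factor, 4-run full factorial and an assumed variance estimate (neither taken from the attached designs):

```python
import numpy as np

# Hypothetical design matrix: intercept, X1, X2, X1*X2, coded -1/+1
X = np.array([[1, -1, -1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1,  1,  1,  1]])

# Relative prediction variance at a point x is x' (X'X)^-1 x;
# this is what the profile plots, under the assumption sigma^2 = 1.
XtX_inv = np.linalg.inv(X.T @ X)
x = np.array([1, 0, 0, 0])      # centre of the design space
rel_var = x @ XtX_inv @ x       # 0.25 for this design

# With a prior estimate of the response variance, rescale:
sigma2_hat = 4.0                    # assumed variance estimate
actual_var = rel_var * sigma2_hat   # actual prediction variance
se_pred = np.sqrt(actual_var)       # standard error of prediction
```
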
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/564332#M77684</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;, thank you VERY much for the clarification and quick response, as always!!&lt;BR /&gt;&lt;BR /&gt;I have now polished my design to include all 4 factors (&lt;U&gt;not&lt;/U&gt; ignoring X4, since the effect of this factor is still completely unknown) and all interactions with estimability set to 'necessary', as well as the quadratic effect of my time variable (X3), also set to 'necessary'. I kept the quadratic effects of the other 3 variables (X1, X2, X4) with estimability set to 'if possible', as I presume there will be curvature, so I'd like to augment the design in the future.&lt;BR /&gt;[I expect significant effects from the interaction terms with most factors, except the interactions with X4, the factor I don't have any advance information about. That's why I kept all the interactions with estimability set to 'necessary' in this polished design.]&lt;BR /&gt;&lt;BR /&gt;In my previous design,&amp;nbsp;&lt;SPAN&gt;"i_tv5_Iopt_changedEst_remov2_24runs"&lt;/SPAN&gt;, I had removed the terms 'X1*X1' and 'X2*X4', &lt;U&gt;but since I don't have any evidence that these are not significant&lt;/U&gt; (and based on the various clarifications you helped me with), &lt;U&gt;I decided to keep them in the new polished design&lt;/U&gt;&amp;nbsp;('X1*X1' with estimability set to 'if possible', and 'X2*X4' set to 'necessary').&lt;BR /&gt;&lt;BR /&gt;The polished design, named "tv6_Dopt_24rns_QTermsIfPoss_ExceptX3X3_110322", is attached.&lt;BR /&gt;The variance seems to have increased now, but from what you've explained, I understand this is okay since I plan to augment this design in the future.&lt;BR /&gt;&lt;BR /&gt;Now I was wondering: how important is the "blocking" variable?&lt;BR /&gt;&lt;BR /&gt;My experiments are run in a (reaction) container that can test&lt;U&gt;&amp;nbsp;up to 12 different conditions (treatments) at a time&lt;/U&gt;. Since this design has 24 different treatments (24 runs), I will have to run 12 treatments first and complete that experiment. Then, once I am done with this first batch of 12 treatments, I will run a separate experiment with the remaining 12 treatments. Should I add the "blocking" variable, then?&lt;BR /&gt;I tried adding this "blocking" variable by entering '12' where it says "Group runs into random blocks of size" (design attached, named "tv7_liketv6_withBlocking"). I noticed that this increased my variance slightly (see screenshot; orange corresponds to this tv7 design).&amp;nbsp;&lt;BR /&gt;1- Did I do this part correctly,&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;?&amp;nbsp;&lt;BR /&gt;2- And since I have room in the container for duplicates, I can also now augment the design to duplicate it, right? What do I do with the "random block" variable when duplicating the design? See the 1st screenshot attached: should I also add it into the 'X, factor' box?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11568"&gt;@Victor_G&lt;/a&gt;&amp;nbsp;!!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 03 Nov 2022 17:07:03 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/564332#M77684</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-11-03T17:07:03Z</dc:date>
    </item>
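One augmentation route discussed in this thread is to copy the 24 rows and relabel the random block levels (1 becomes 3, 2 becomes 4) so the replicate batch gets fresh blocks. A minimal pandas sketch; the column name "Block" and the placeholder factor settings are assumptions, not the attached table's actual contents:

```python
import pandas as pd

# Hypothetical 24-run design table with a random block column
# (2 blocks of 12); the factor column is a placeholder.
base = pd.DataFrame({
    "Block": [1] * 12 + [2] * 12,
    "X1": [-1, 1] * 12,          # placeholder factor settings
})

# Duplicate the runs and shift the block labels so the replicate
# batch gets its own blocks: 1 becomes 3, 2 becomes 4.
replicate = base.copy()
replicate["Block"] = replicate["Block"] + 2

augmented = pd.concat([base, replicate], ignore_index=True)
# augmented now has 48 runs in blocks 1-4, 12 runs each
```
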
    <item>
      <title>Re: Comparing DoEs- Why D/G/A/I- efficiencies are all the SAME and terribly LOW?</title>
      <link>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/564333#M77685</link>
      <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;&amp;nbsp;for the clarification!!&lt;/P&gt;</description>
      <pubDate>Thu, 03 Nov 2022 17:10:42 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Comparing-DoEs-Why-D-G-A-I-efficiencies-are-all-the-SAME-and/m-p/564333#M77685</guid>
      <dc:creator>ADouyon</dc:creator>
      <dc:date>2022-11-03T17:10:42Z</dc:date>
    </item>
  </channel>
</rss>

