<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: &amp;quot;Surprising&amp;quot; results in an DSD-Design in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942144#M109489</link>
    <description>&lt;P&gt;In the DSD result you can find 5 significant pure effects and also a couple of potentially significant but correlated interaction effects, which need to be de-aliased by adding extra runs.&lt;/P&gt;</description>
    <pubDate>Wed, 15 Apr 2026 14:58:26 GMT</pubDate>
    <dc:creator>frankderuyck</dc:creator>
    <dc:date>2026-04-15T14:58:26Z</dc:date>
    <item>
      <title>"Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941518#M109455</link>
      <description>&lt;P data-end="195" data-start="0"&gt;As part of a master’s thesis, a Definitive Screening Design with six continuous variables was conducted. We generated a standard design in JMP with 17 runs (+3 additional runs at the zero level).&lt;/P&gt;
&lt;P data-end="561" data-start="197"&gt;The results surprised me in that a considerable number of interaction and quadratic effects are highly significant. My initial suspicion was overfitting, but I cannot find any indications supporting that. We have an excellent R-squared (which is expected), no elevated VIFs, an unremarkable Durbin–Watson test, etc. Furthermore, a PRESS value of 0.044 is achieved.&lt;/P&gt;
&lt;P data-end="774" data-start="563"&gt;These findings remain the same or very similar even when strict heredity is disabled in the DSD analysis. In that case, 2–3 additional interaction or quadratic effects appear, which are again highly significant.&lt;/P&gt;
&lt;P data-is-only-node="" data-is-last-node="" data-end="898" data-start="776"&gt;Based on everything I know, I currently see no reason to doubt the validity of the results. Or am I overlooking something?&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="NominalGemsbok3_1-1776096715422.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99385iAD192E06C70E1FDE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="NominalGemsbok3_1-1776096715422.png" alt="NominalGemsbok3_1-1776096715422.png" /&gt;&lt;/span&gt;&lt;/P&gt;
</description>
      <pubDate>Mon, 13 Apr 2026 16:16:19 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941518#M109455</guid>
      <dc:creator>NominalGemsbok3</dc:creator>
      <dc:date>2026-04-13T16:16:19Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941575#M109456</link>
      <description>&lt;P&gt;For a 6-factor design, this is a relatively "large" design, and because there are only 27 estimable effects in a response surface with 6 factors, you have a good chance of estimating many of those effects. Looking at the center-point runs, they range from 3.915 to 4.018, so a residual standard deviation of about 0.02. So any effect that is much larger than 0.2 is going to have a good chance of being significant. Looking at the fold-over points, I see differences of ~0.6.&lt;/P&gt;
&lt;P&gt;So yes, this is entirely possible. &amp;nbsp;Sharing the actual data table would help with exploring this further. &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 18:27:26 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941575#M109456</guid>
      <dc:creator>MathStatChem</dc:creator>
      <dc:date>2026-04-13T18:27:26Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941589#M109458</link>
      <description>&lt;P&gt;There is essentially no way to respond, as there is too little insight into the actual experiment. Is this an actual physical experiment (or a simulation)? Was measurement error quantified a priori? How was your MSE estimated? Was the change in response of any practical value? Statistical significance is a conditional statement. It is not meaningful unless the estimate of the MSE is a reasonable estimate of reality (and you understand what constitutes the estimate).&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 20:44:37 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941589#M109458</guid>
      <dc:creator>statman</dc:creator>
      <dc:date>2026-04-13T20:44:37Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941595#M109460</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/80178"&gt;@NominalGemsbok3&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;I have some complementary suggestions and remarks regarding the results you have obtained.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;On a practical note, there are better metrics to assess a possible overfitting scenario: use Rsquare adjusted and minimize the delta between Rsquare and Rsquare adjusted (Rsquare tends to increase with the number of terms in the model, whereas &lt;A href="https://www.jmp.com/support/help/en/19.1/#page/jmp/summary-of-fit.shtml" target="_self"&gt;Rsquare adjusted&lt;/A&gt; considers the complexity/number of terms in the model and penalizes overly complex models), and information criteria (like &lt;A href="https://www.jmp.com/support/help/en/19.1/#page/jmp/likelihood-aicc-and-bic.shtml" target="_self"&gt;AICc and BIC&lt;/A&gt;), which help compare different models by balancing accuracy against complexity. &lt;A href="https://www.jmp.com/support/help/en/19.1/#page/jmp/press.shtml" target="_self"&gt;PRESS&lt;/A&gt; values (Rsquare and RMSE) are also good indicators of model performance, as they are based on leave-one-out cross-validation, so you get a glimpse of model performance on points not used in model training. All these metrics help with model selection and avoid the trap of the "Cult of Statistical Significance" (&lt;A href="https://community.jmp.com/t5/Discussions/Statistical-Significance/m-p/765772#U765772" target="_self"&gt;related discussion&lt;/A&gt;).&lt;/LI&gt;
&lt;LI&gt;Also, statistical significance is different from the practical importance of the effects: if you have a very high signal/noise ratio, you might detect statistically significant effects more easily, even if they have no practical importance. Use domain expertise to filter the relevant and important effects and guide the model selection process.&lt;/LI&gt;
&lt;LI&gt;Regarding the number of active effects, what is reassuring is to see that:
&lt;UL&gt;
&lt;LI&gt;The design and modeling approach are correctly chosen for the number of active terms detected. From the &lt;A href="https://www.jmp.com/support/help/en/19.1/#page/jmp/overview-of-the-fit-definitive-screening-platform.shtml#ww292638" target="_self"&gt;JMP Help on Fit Definitive Screening&lt;/A&gt;: "&lt;EM&gt;A minimum run-size DSD is capable of correctly identifying active terms with high probability if the number of active effects is less than about half the number of runs and if the effects sizes exceed twice the standard deviation&lt;/EM&gt;". With 9 active terms detected out of 17 original runs (+3 added centre points) and a relatively low RMSE, you seem to be in the good situation mentioned there.&lt;/LI&gt;
&lt;LI&gt;They do respect the &lt;A href="https://www.jmp.com/support/help/en/19.1/#page/jmp/effect-hierarchy.shtml" target="_self"&gt;Effect Hierarchy&lt;/A&gt; principle: the higher the order of the effect, the less likely this effect will explain variation in the response. If we look at your situation, 5/6 (83%) of main effects are detected as significant, as well as 3/15 (20%) of the two-factor interactions and 1/6 (16.7%) of the quadratic effects. The proportion of active effects may seem high, but it tends to follow the same regularity across effect orders as the paper by Xiang Li, Nandan Sudarsanam, and Daniel D. Frey (March 2006), "Regularities in Data from Factorial Experiments", &lt;A href="https://doi.org/10.1002/cplx.20123" target="_blank" rel="noopener"&gt;https://doi.org/10.1002/cplx.20123&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
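The overfitting metrics mentioned in the first bullet can be sketched as follows; the run counts, term counts, and Rsquare values are made up for illustration:

```python
import numpy as np

# Sketch of the model-comparison metrics mentioned above, computed
# for hypothetical fits with n runs, k model terms and RSS known.
def r2_adjusted(r2, n, k):
    # Penalizes added terms: shrinks as k grows relative to n.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def aicc(rss, n, p):
    # Gaussian-likelihood AICc with small-sample correction; p counts
    # all estimated parameters. Lower is better.
    aic = n * np.log(rss / n) + 2 * p
    return aic + 2 * p * (p + 1) / (n - p - 1)

# Hypothetical comparison: a lean model vs. a heavier one on 20 runs;
# the heavier model's raw Rsquare is higher, but adjusted it is not.
print(round(r2_adjusted(0.98, 20, 9), 3),
      round(r2_adjusted(0.99, 20, 15), 3))
```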
&lt;P&gt;It seems your results are the conclusion of a careful selection of potentially relevant predictors, a low level of noise in experimentation and response measurement, and an appropriate design choice and modeling approach. Congrats!&lt;BR /&gt;&lt;BR /&gt;Hope this answer helps you trust these positive results,&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 23:24:19 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941595#M109460</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2026-04-13T23:24:19Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941636#M109461</link>
      <description>&lt;P&gt;It is a real, physical experiment. The MSE was estimated from the residuals of a linear model. To validate this estimate, pure error was obtained from replicated center points (variance ≈ 0.002). A lack-of-fit test indicated no significant model inadequacy (p = 0.79), with the mean square for lack of fit being smaller than that of the pure error. Therefore, the MSE can be considered an appropriate estimate of the error variance.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2026 06:21:22 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941636#M109461</guid>
      <dc:creator>NominalGemsbok3</dc:creator>
      <dc:date>2026-04-14T06:21:22Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941637#M109462</link>
      <description>&lt;P&gt;Thank you very much for this encouraging and detailed response.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2026 06:23:34 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941637#M109462</guid>
      <dc:creator>NominalGemsbok3</dc:creator>
      <dc:date>2026-04-14T06:23:34Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941665#M109466</link>
      <description>&lt;P&gt;Thanks for your response. What do you mean by the "actual data table"?&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2026 08:24:01 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941665#M109466</guid>
      <dc:creator>NominalGemsbok3</dc:creator>
      <dc:date>2026-04-14T08:24:01Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941735#M109471</link>
      <description>&lt;P&gt;Is it possible to send a coded JMP file? I would like to try some GENREG models.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2026 11:37:27 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941735#M109471</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-14T11:37:27Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941913#M109479</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;
&lt;P&gt;here is a coded JMP file.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2026 18:28:54 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941913#M109479</guid>
      <dc:creator>NominalGemsbok3</dc:creator>
      <dc:date>2026-04-14T18:28:54Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941970#M109483</link>
      <description>&lt;P&gt;I did a quick analysis (Fit Model) and my results are completely different from yours. Three main effects are the largest effects. I added the fit model to your data table (Tabelle1).&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2026 21:19:38 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/941970#M109483</guid>
      <dc:creator>statman</dc:creator>
      <dc:date>2026-04-14T21:19:38Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942044#M109485</link>
      <description>&lt;P&gt;Could it be that the difference arises because this is a Definitive Screening Design, and I used the function DoE → Definitive Screening → Fit Definitive Screening in JMP for the analysis, while you did not? I believe that DSDs should definitely be analyzed using the appropriate method.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 07:25:47 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942044#M109485</guid>
      <dc:creator>NominalGemsbok3</dc:creator>
      <dc:date>2026-04-15T07:25:47Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942132#M109487</link>
      <description>&lt;P&gt;Be aware that in a DSD many interaction effects are correlated (highly aliased, R &amp;gt; 0.7; cf. the pink fields and the Design Evaluation Color Map on Correlations in the annexed table). When you have 5 active factors, you will need to augment the DSD and add extra runs to de-alias the interaction effects.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="frankderuyck_0-1776262859844.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99691iA1C331FBE577C8AF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="frankderuyck_0-1776262859844.png" alt="frankderuyck_0-1776262859844.png" /&gt;&lt;/span&gt;&lt;/P&gt;
</description>
      <pubDate>Wed, 15 Apr 2026 14:23:50 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942132#M109487</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-15T14:23:50Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942133#M109488</link>
      <description>&lt;P&gt;If you run Fit Definitive Screening and select "Make Model", what do you end up with? You realize the p-value significance is based on the replicated center points.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 14:48:20 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942133#M109488</guid>
      <dc:creator>statman</dc:creator>
      <dc:date>2026-04-15T14:48:20Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942144#M109489</link>
      <description>&lt;P&gt;In the DSD result you can find 5 significant pure effects and also a couple of potentially significant but correlated interaction effects, which need to be de-aliased by adding extra runs.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 14:58:26 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942144#M109489</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-15T14:58:26Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942148#M109490</link>
      <description>&lt;P&gt;In the attached table is a DSD analysis using the Fit Definitive Screening platform; two significant correlated interaction effects are detected: (1) factor1*factor3, correlated with factor2*factor5, and (2) factor4*factor5, correlated with factor1*factor2.&lt;/P&gt;
&lt;P&gt;Note that, as you can see in the pink fields of the correlation color map, there is also aliasing with factor6 two-factor interaction effects; however, as factor6 is not an active effect, I did not take its interactions into account. A very good model can be constructed with interactions from the 5 significant main effects.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 15:19:08 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942148#M109490</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-15T15:19:08Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942444#M109506</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/283"&gt;@frankderuyck&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/80178"&gt;@NominalGemsbok3&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;There is no ultimate best model, for multiple reasons: choice of performance metric, threshold (for the p-value, for example), estimation method, etc. And there are not enough unique treatments (degrees of freedom) in a DSD to estimate all effects, so you can easily end up with different but competing models with good performances. You could see a DSD as a kind of supersaturated design for a response surface model.&lt;/P&gt;
&lt;P&gt;As &lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/283"&gt;@frankderuyck&lt;/a&gt; mentioned, due to the presence of partial aliases/correlations between interaction effects, and also between quadratic effects due to the design structure, you can't be 100% sure about the "real" impact on your target of the interaction and quadratic effects that are detected (by any modeling method), unless you add runs to better inform your model. You can, however, have more confidence in the main effects, as the design structure avoids any correlation between main effects, and between main effects and higher-order effects, so you can estimate them without bias.&lt;/P&gt;
&lt;P&gt;"&lt;STRONG&gt;&lt;EM&gt;All models are wrong, but some are useful&lt;/EM&gt;&lt;/STRONG&gt;"&lt;BR /&gt;I tried to create a specific visualization called raster plot (see &lt;LI-MESSAGE title="Raster plots or other visualization tools to help model evaluation and selection for DoEs" uid="730968" url="https://community.jmp.com/t5/JMP-Wish-List/Raster-plots-or-other-visualization-tools-to-help-model/m-p/730968#U730968" discussion_style_icon_css="lia-mention-container-editor-message lia-img-icon-idea-thread lia-fa-icon lia-fa-idea lia-fa-thread lia-fa"&gt;&lt;/LI-MESSAGE&gt;&amp;nbsp;to see how it has been created) on this example to show this m&lt;SPAN&gt;ultiplicity of models due to the combinatorial explosion of possible terms included in the model (besides the intercept, there are 27 possible effect terms: 6 main effects, 15 two-factor interactions and 6 quadratic effects to choose from), using the platform Stepwise and the option "All Possible Models", (up to 10 terms in the model with strong heredity assumption).&lt;/SPAN&gt;&amp;nbsp;Here is the result of the models, sorted by Rsquare value, which shows which terms (in columns) are included for each model (each line):&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_0-1776361380513.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99835i234385C7F11FF96C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_0-1776361380513.png" alt="Victor_G_0-1776361380513.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I prefer using an information criterion for comparing multiple models, such as AICc (the lower the better), as it penalizes the use of too many terms and allows a better comparison of models with different complexities:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_6-1776365928854.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99844iFF9AEB2BF4D20BF1/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_6-1776365928854.png" alt="Victor_G_6-1776365928854.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;As you can see, most of the models agree on the presence of the main effects of the first 5 factors. For factor 6, the results differ, and there is no obvious pattern of presence for this main effect. For interactions and quadratic effects, it is also hard to see strong patterns, except that some higher-order effects are rarely included: the interactions factor 1 x factor 6, factor 2 x factor 4, factor 3 x factor 5, factor 3 x factor 6, factor 4 x factor 6 and factor 5 x factor 6. Among the quadratic effects, factor 6 x factor 6 is absent from most models. If we zoom in a little on the best models according to Rsquare value, there are some interesting observations on higher-order effects:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_4-1776361794661.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99836i82CD7B202CBE19A4/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_4-1776361794661.png" alt="Victor_G_4-1776361794661.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;The interactions factor 1 x factor 3 and factor 4 x factor 5 tend to be chosen often in models. Moreover, the quadratic effects for factor 2 and factor 4 are also often selected. These results tend to agree with the results I obtained from the Fit Definitive Screening platform, with the same main effects and higher-order effects detected:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_5-1776361972029.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99837iC8E5031CF48DE350/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_5-1776361972029.png" alt="Victor_G_5-1776361972029.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;When limiting the comparison to three different estimation methods, you can also see this situation of different but equivalent models and term combinations. For example, with the Fit Definitive Screening, GenReg Normal Pruned Forward and GenReg Two Stage Forward estimation methods, we can compare both the performances of the models and the terms included:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Performances: here with Rsquare and Rsquare adjusted for explanatory purposes (how much the model explains the variability in the response):&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_0-1776366685509.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99847iFCC711CBB265B3AF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_0-1776366685509.png" alt="Victor_G_0-1776366685509.png" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We can see that the first two methods show similar performances.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Terms in the models:&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_1-1776366757150.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99848iCE9E12B80A20DFFC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_1-1776366757150.png" alt="Victor_G_1-1776366757150.png" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Even if the first two estimation methods provide models with similar performances, the terms included for higher-order effects are different. They only agree on the inclusion of the interaction effect factor 1 x factor 3.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So a reasonable follow-up would be to discuss with domain experts which model(s) are the most sensible/reasonable, and use the &lt;A href="https://www.jmp.com/support/help/en/19.1/#page/jmp/augment-designs.shtml" target="_blank" rel="noopener"&gt;Augment Designs&lt;/A&gt; platform to confirm and/or refine the most relevant model. You can, for example, augment the design and specify the model whose terms you want to estimate.&lt;/P&gt;
&lt;P&gt;Please find the table with all scripts used in my response.&lt;BR /&gt;Hope this answer will help you,&lt;/P&gt;
&lt;DIV id="tinyMceEditor_1986072316aec6Victor_G_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Thu, 16 Apr 2026 23:04:49 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942444#M109506</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2026-04-16T23:04:49Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942525#M109508</link>
      <description>&lt;P&gt;A DSD is still a screening design, appropriate for screening out strong effects - particularly main effects - from a larger set of potential factors; it will also give an indication of potential quadratic effects and interactions, but be aware that the latter are correlated (!) - cf. the color map on correlations. If you get three or fewer active effects, the DSD will yield an RSM. But if there are more than 3 significant effects, you will need to augment the DSD to determine pure interaction &amp;amp; quadratic effects. Therefore, if after brainstorming with experts there are probably interaction effects, I always start with a minimal or low-run DSD, sufficient to detect strong effects, so that there are enough runs left in the budget for augmentation and de-aliasing. Also, I would not spend too much effort on center-point replication.&lt;/P&gt;</description>
      <pubDate>Fri, 17 Apr 2026 07:58:24 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942525#M109508</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-17T07:58:24Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942565#M109514</link>
      <description>&lt;P&gt;Off the record: strange that not all 5 active effects show up in the half-normal plot; only three do.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="frankderuyck_0-1776418183497.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/99909i54C872F2218FF1DA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="frankderuyck_0-1776418183497.png" alt="frankderuyck_0-1776418183497.png" /&gt;&lt;/span&gt;&lt;/P&gt;
</description>
      <pubDate>Fri, 17 Apr 2026 09:30:50 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942565#M109514</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-17T09:30:50Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942594#M109515</link>
      <description>&lt;P&gt;You have to be careful evaluating normal/half-normal plots (see Daniel). You can't always use Lenth's PSE. If you look at the data, there appears to be more than one distribution of errors. Notice the bottom 6 values, then a break, then 2, then another break, then 5. This is indicative of a change in noise during the experiment, and it might be considered evidence of a &lt;EM&gt;special cause&lt;/EM&gt;. Look at the Pareto chart that coincides with this half-normal plot; you will see the "grouping" of estimates more easily.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On a side note... replicating center points can be an excellent way to estimate the MSE. CPs can be used to test the linearity assumption quite efficiently, even though the test is not specific. If you run enough of them randomly (~8) throughout the experiment, they can also assess stability over the experiment; plot the moving range in run order. Also, if they are, for example, current conditions, they allow you to set levels more boldly on either side of current to get better directional insight. The only constraint is that the factors must be continuous.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 17 Apr 2026 13:56:15 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942594#M109515</guid>
      <dc:creator>statman</dc:creator>
      <dc:date>2026-04-17T13:56:15Z</dc:date>
    </item>
    <item>
      <title>Re: "Surprising" results in an DSD-Design</title>
      <link>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942595#M109516</link>
      <description>&lt;P&gt;Interesting - how can this plot then be used better to detect active effects?&lt;/P&gt;</description>
      <pubDate>Fri, 17 Apr 2026 13:55:06 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/quot-Surprising-quot-results-in-an-DSD-Design/m-p/942595#M109516</guid>
      <dc:creator>frankderuyck</dc:creator>
      <dc:date>2026-04-17T13:55:06Z</dc:date>
    </item>
  </channel>
</rss>

