<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Model verification/validation in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236831#M46743</link>
    <description>Many thanks for the tip. Actually, there are differences (but all rather small differences).</description>
    <pubDate>Thu, 28 Nov 2019 14:22:23 GMT</pubDate>
    <dc:creator>Hugo_JOHAN</dc:creator>
    <dc:date>2019-11-28T14:22:23Z</dc:date>
    <item>
      <title>Model verification/validation</title>
      <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236652#M46706</link>
      <description>&lt;P&gt;Dear all,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I generated a model from 20 runs generated with a DoE.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also ran, in parallel, 20 runs designed with an LHS on the same design space.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would like to compare the observed responses obtained with the LHS to the predicted responses obtained with the DoE.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Which kind of procedure would you suggest to validate/verify the initial DoE model?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I used equivalence testing, but it seems a stringent test (I mean, a model would have to perform very well to pass it).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please note that I do not have JMP Pro.&lt;/P&gt;&lt;P&gt;Many thanks,&lt;/P&gt;&lt;P&gt;Hugo&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/2742"&gt;@martindemel&lt;/a&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 09:28:35 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236652#M46706</guid>
      <dc:creator>Hugo_JOHAN</dc:creator>
      <dc:date>2019-11-27T09:28:35Z</dc:date>
    </item>
    <item>
      <title>Re: Model verification/validation</title>
      <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236703#M46718</link>
      <description>&lt;P&gt;It's not clear to me what an 'LHS' is, but I'd start simple with respect to comparing the results. I'll presume an 'LHS' is some sort of empirical investigation run at treatment combinations corresponding to those used to make the predictions across the original DOE's design space? If that's the case, maybe start with a histogram of the differences between the predicted values and the 'LHS' values. Then maybe a scatter plot (Graph Builder or Fit Y by X) of the predicted vs. LHS values, with the predicted values on one axis and the LHS values on the other.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 15:10:53 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236703#M46718</guid>
      <dc:creator>P_Bartell</dc:creator>
      <dc:date>2019-11-27T15:10:53Z</dc:date>
    </item>
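    The "start simple" advice above (differences first, then plots) can be sketched outside JMP as well. A minimal Python sketch; the response values are hypothetical, since the thread does not include the actual 20-run data:

    ```python
    import statistics

    # Hypothetical predicted (DoE model) and observed (LHS run) responses;
    # replace with the real paired values from the two designs.
    predicted = [10.2, 11.5, 9.8, 12.1, 10.9, 11.0, 9.5, 12.4, 10.1, 11.8]
    observed  = [10.0, 11.7, 9.6, 12.3, 11.1, 10.8, 9.7, 12.2, 10.3, 11.6]

    # Per-run differences: the "histogram of the difference" starting point
    diffs = [o - p for p, o in zip(predicted, observed)]

    # Simple numeric summary in place of the histogram / scatter plot
    print("mean diff:", statistics.mean(diffs))
    print("sd diff:  ", statistics.stdev(diffs))
    print("range:    ", (min(diffs), max(diffs)))
    ```

    In JMP itself the same comparison is a formula column of differences viewed in Distribution, plus predicted vs. observed in Graph Builder or Fit Y by X.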
    <item>
      <title>Re: Model verification/validation</title>
      <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236704#M46719</link>
      <description>&lt;P&gt;You might use a matched pairs t-test for a significant difference between the predicted and observed responses. That is, use the original model to predict the response for each run in the Latin hypercube design. Your data table will use two data columns for this analysis. Enter them as predicted and observed in the Y role of the Matched Pairs launch.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 15:17:46 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236704#M46719</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2019-11-27T15:17:46Z</dc:date>
    </item>
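    The matched pairs t-test suggested above runs in JMP's Matched Pairs platform, but the arithmetic it applies to the two columns can be sketched in plain Python. The data values below are hypothetical placeholders for the paired predicted/observed columns:

    ```python
    import math
    import statistics

    # Hypothetical predicted/observed columns; replace with the real runs
    predicted = [10.2, 11.5, 9.8, 12.1, 10.9, 11.0, 9.5, 12.4, 10.1, 11.8]
    observed  = [10.1, 11.9, 9.5, 12.5, 11.2, 10.7, 9.8, 12.1, 10.4, 11.5]

    # A matched pairs t-test works on the per-pair differences
    diffs = [o - p for p, o in zip(predicted, observed)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)              # sample SD of the differences
    t_stat = mean_d / (sd_d / math.sqrt(n))     # t statistic with n - 1 df

    print(f"t = {t_stat:.3f} on {n - 1} df")
    ```

    A |t| smaller than the critical value for n - 1 degrees of freedom means no detectable systematic bias between model and observation, which is a weaker (and easier to pass) criterion than the equivalence test mentioned in the original question.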
    <item>
      <title>Re: Model verification/validation</title>
      <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236705#M46720</link>
      <description>&lt;P&gt;Just curious: what is the difference, if any, between the models obtained from the two DOEs for the same factors?&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 15:18:38 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236705#M46720</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2019-11-27T15:18:38Z</dc:date>
    </item>
    <item>
      <title>Re: Model verification/validation</title>
      <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236816#M46740</link>
      <description>LHS stands for Latin Hypercube Sampling, a space-filling design. I used the same factors as for the DoE to build the LHS design. Thanks for the suggestions.</description>
      <pubDate>Thu, 28 Nov 2019 13:25:04 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236816#M46740</guid>
      <dc:creator>Hugo_JOHAN</dc:creator>
      <dc:date>2019-11-28T13:25:04Z</dc:date>
    </item>
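    Latin Hypercube Sampling itself is easy to sketch: each factor's range is cut into as many equal strata as there are runs, and each stratum is used exactly once per factor. A minimal Python sketch on the unit cube (illustrative only, not JMP's actual space-filling algorithm):

    ```python
    import random

    def latin_hypercube(n_runs, n_factors, seed=1):
        """One LHS sample on the unit cube: each factor's range is split
        into n_runs equal strata, and each stratum is used exactly once."""
        rng = random.Random(seed)
        columns = []
        for _ in range(n_factors):
            strata = list(range(n_runs))
            rng.shuffle(strata)  # random pairing of strata across factors
            columns.append([(s + rng.random()) / n_runs for s in strata])
        # transpose so each row is one run
        return [tuple(col[i] for col in columns) for i in range(n_runs)]

    # 20 runs in 3 factors, matching the thread's 20-run setup
    design = latin_hypercube(20, 3)
    print(len(design), "runs x", len(design[0]), "factors")
    ```

    Factor settings on real ranges are obtained by rescaling each unit-cube coordinate to the corresponding factor's low/high limits.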
    <item>
      <title>Re: Model verification/validation</title>
      <link>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236831#M46743</link>
      <description>Many thanks for the tip. Actually, there are differences (but all rather small differences).</description>
      <pubDate>Thu, 28 Nov 2019 14:22:23 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Model-verification-validation/m-p/236831#M46743</guid>
      <dc:creator>Hugo_JOHAN</dc:creator>
      <dc:date>2019-11-28T14:22:23Z</dc:date>
    </item>
  </channel>
</rss>

