<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: compare results in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/compare-results/m-p/829919#M101231</link>
    <description>&lt;P&gt;Right-click on the table box and select either Make Into Data Table or Make Combined Data Table.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jthi_0-1738319211301.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/72408i3A74A242C585CE34/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jthi_0-1738319211301.png" alt="jthi_0-1738319211301.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 31 Jan 2025 10:27:13 GMT</pubDate>
    <dc:creator>jthi</dc:creator>
    <dc:date>2025-01-31T10:27:13Z</dc:date>
    <item>
      <title>compare results</title>
      <link>https://community.jmp.com/t5/Discussions/compare-results/m-p/829796#M101213</link>
      <description>&lt;P&gt;Hello,&lt;BR /&gt;After obtaining various results from different classification methods, how can I perform a statistical test in JMP to confirm that the differences between them are not due to randomness?&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="maryam_nourmand_0-1738262120741.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/72396iFDD5591847146050/image-size/medium?v=v2&amp;amp;px=400" role="button" title="maryam_nourmand_0-1738262120741.png" alt="maryam_nourmand_0-1738262120741.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2025 18:35:51 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/compare-results/m-p/829796#M101213</guid>
      <dc:creator>maryam_nourmand</dc:creator>
      <dc:date>2025-01-30T18:35:51Z</dc:date>
    </item>
    <item>
      <title>Re: compare results</title>
      <link>https://community.jmp.com/t5/Discussions/compare-results/m-p/829830#M101216</link>
      <description>&lt;P&gt;I'm not aware of any statistical tests for whether these different models vary "significantly" or not, but I don't think that is the right way to proceed.&amp;nbsp; You have a number of relevant things to compare.&amp;nbsp; Using the validation data (or test data if you have a large enough data set to have created that), you can look at ROC, misclassification rates, entropy and generalized R square measures, and detailed looks at the misclassifications made by each model - with appropriate adjustments in the cutoff probabilities for the classifications.&amp;nbsp; What I tend to do is look closely at the prediction probability distributions that each model gives and evaluate the usefulness of each model for the problem at hand.&amp;nbsp; Rarely are false positives and false negatives of equal importance, so you really want a model that works "well" for the decision problem you are modeling.&amp;nbsp; And the predicted probability distributions usually provide rich evidence to use in making that evaluation.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2025 20:59:24 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/compare-results/m-p/829830#M101216</guid>
      <dc:creator>dlehman1</dc:creator>
      <dc:date>2025-01-30T20:59:24Z</dc:date>
    </item>
    <item>
      <title>Re: compare results</title>
      <link>https://community.jmp.com/t5/Discussions/compare-results/m-p/829917#M101230</link>
      <description>&lt;P&gt;How can I save the R&#178; results in a data table? I want to run an ANOVA test on them.&lt;/P&gt;</description>
      <pubDate>Fri, 31 Jan 2025 10:22:37 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/compare-results/m-p/829917#M101230</guid>
      <dc:creator>maryam_nourmand</dc:creator>
      <dc:date>2025-01-31T10:22:37Z</dc:date>
    </item>
    <item>
      <title>Re: compare results</title>
      <link>https://community.jmp.com/t5/Discussions/compare-results/m-p/829919#M101231</link>
      <description>&lt;P&gt;Right-click on the table box and select either Make Into Data Table or Make Combined Data Table.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jthi_0-1738319211301.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/72408i3A74A242C585CE34/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jthi_0-1738319211301.png" alt="jthi_0-1738319211301.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 31 Jan 2025 10:27:13 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/compare-results/m-p/829919#M101231</guid>
      <dc:creator>jthi</dc:creator>
      <dc:date>2025-01-31T10:27:13Z</dc:date>
    </item>
    <item>
      <title>Re: compare results</title>
      <link>https://community.jmp.com/t5/Discussions/compare-results/m-p/830000#M101238</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/56938"&gt;@maryam_nourmand&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If your objective is to evaluate the robustness of model performance, I can only recommend and emphasize the options raised by&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/53879"&gt;@dlehman1&lt;/a&gt;.&lt;/P&gt;
&lt;P&gt;There are several ways to evaluate a model's robustness:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;By measuring model performance under various random seeds (as randomness is part of learning in ML models: &lt;A href="https://mindfulmodeler.substack.com/p/no-learning-without-randomness" target="_self"&gt;"No Learning without randomness"&lt;/A&gt;),&lt;/LI&gt;
&lt;LI&gt;By measuring model performance under different training and validation sets.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the second option, using a Validation formula column together with the "Simulate" option lets you refit the model under a large number of different training conditions/sets. You can read more about how to do this in JMP in the following posts:&lt;/P&gt;
&lt;P&gt;&lt;LI-MESSAGE title="How can I automate and summarize many repeat validations into one output table?" uid="623831" url="https://community.jmp.com/t5/Discussions/How-can-I-automate-and-summarize-many-repeat-validations-into/m-p/623831#U623831" discussion_style_icon_css="lia-mention-container-editor-message lia-img-icon-forum-thread lia-fa-icon lia-fa-forum lia-fa-thread lia-fa"&gt;&lt;/LI-MESSAGE&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-MESSAGE title="Boosted Tree - Tuning TABLE DESIGN" uid="609484" url="https://community.jmp.com/t5/Discussions/Boosted-Tree-Tuning-TABLE-DESIGN/m-p/609484#U609484" discussion_style_icon_css="lia-mention-container-editor-message lia-img-icon-forum-thread lia-fa-icon lia-fa-forum lia-fa-thread lia-fa"&gt;&lt;/LI-MESSAGE&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the results from these simulations, you can then visualize and compare performance distributions across models and, if needed, run some statistical tests, for example comparing the mean/median/variance of the simulated prediction performance between models.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/53879"&gt;@dlehman1&lt;/a&gt;&amp;nbsp;stated, the default metrics are interesting, but you might want to fine-tune them, as classification errors may not all have the same "importance".&lt;/P&gt;
&lt;P&gt;Also be careful with the probabilities displayed by ML models if you intend to use them as "confidence levels": not all models are calibrated, and depending on the model chosen and the sample size, this can have different impacts and consequences. More about the calibration topic:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://scikit-learn.org/stable/auto_examples/calibration/plot_compare_calibration.html" target="_blank" rel="noopener"&gt;https://scikit-learn.org/stable/auto_examples/calibration/plot_compare_calibration.html&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://ploomber.io/blog/calibration-curve/" target="_blank" rel="noopener"&gt;https://ploomber.io/blog/calibration-curve/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope this answer and these few considerations help,&lt;/P&gt;</description>
      <pubDate>Fri, 31 Jan 2025 15:58:15 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/compare-results/m-p/830000#M101238</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2025-01-31T15:58:15Z</dc:date>
    </item>
  </channel>
</rss>