<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Bayesian optimization-exporting the underlying model in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914358#M107433</link>
    <description>&lt;P&gt;Hello,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; I have enjoyed playing around with the new Bayesian optimization platform. However, I want to compare the results to those of some machine learning models. I see in the output that you DO get a "leave-one-out" R2, but is that truly based on a single random row? Or is some "behind-the-scenes" splitting of the dataset into training and validation portions going on? Can I find out which row(s) were chosen as the holdback?&lt;/P&gt;
&lt;P&gt;Also, is it possible to see the actual model? I have used the "save all model fits" and "save script for next iteration," but I'm not sure if these are the complete models. I want to see how well the model predicts new data, though I realize this may not be exactly what this platform is meant to do.&lt;/P&gt;
&lt;P&gt;Alternatively, could I "load" the optimal GP model into Gaussian Process? I can't see an easy way to do that, especially since I have multiple thetas, but if I can run the optimal GP generated in the GP platform, this might actually solve all of these problems/questions!&lt;/P&gt;</description>
    <pubDate>Thu, 20 Nov 2025 03:09:44 GMT</pubDate>
    <dc:creator>abmayfield</dc:creator>
    <dc:date>2025-11-20T03:09:44Z</dc:date>
    <item>
      <title>Bayesian optimization-exporting the underlying model</title>
      <link>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914358#M107433</link>
      <description>&lt;P&gt;Hello,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; I have enjoyed playing around with the new Bayesian optimization platform. However, I want to compare the results to those of some machine learning models. I see in the output that you DO get a "leave-one-out" R2, but is that truly based on a single random row? Or is some "behind-the-scenes" splitting of the dataset into training and validation portions going on? Can I find out which row(s) were chosen as the holdback?&lt;/P&gt;
&lt;P&gt;Also, is it possible to see the actual model? I have used the "save all model fits" and "save script for next iteration," but I'm not sure if these are the complete models. I want to see how well the model predicts new data, though I realize this may not be exactly what this platform is meant to do.&lt;/P&gt;
&lt;P&gt;Alternatively, could I "load" the optimal GP model into Gaussian Process? I can't see an easy way to do that, especially since I have multiple thetas, but if I can run the optimal GP generated in the GP platform, this might actually solve all of these problems/questions!&lt;/P&gt;</description>
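As a side note on the "leave-one-out R2" question above: in the general LOO cross-validation technique, every row is held out exactly once and predicted by a model trained on the remaining rows, so the statistic is not based on a single random holdback row. A minimal sketch of that general idea (using scikit-learn as a stand-in, not a description of JMP's internals; the data here are synthetic):

```python
# Illustrative leave-one-out R^2 on synthetic data (NOT JMP's implementation):
# each of the 15 rows is predicted by a GP trained on the other 14 rows.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(15, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=15)

gp = GaussianProcessRegressor(alpha=1e-2, normalize_y=True)

# One out-of-fold prediction per row: 15 fits, each leaving one row out.
pred = cross_val_predict(gp, X, y, cv=LeaveOneOut())
loo_r2 = r2_score(y, pred)
print(pred.shape, round(loo_r2, 3))
```

The key point is that `pred` has one entry per row, each produced without that row in the training set, and the R2 is computed over all of them at once.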
      <pubDate>Thu, 20 Nov 2025 03:09:44 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914358#M107433</guid>
      <dc:creator>abmayfield</dc:creator>
      <dc:date>2025-11-20T03:09:44Z</dc:date>
    </item>
    <item>
      <title>Re: Bayesian optimization-exporting the underlying model</title>
      <link>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914474#M107451</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/12111"&gt;@abmayfield&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;As the documentation on&amp;nbsp;&lt;A href="https://www.jmp.com/support/help/en/19.0/#page/jmp/bayesian-optimization.shtml?_gl=1*6ev4x7*_up*MQ..*_ga*MTEzNzA1OTM5Ni4xNzYzNjM0OTIy*_ga_BRNVBEC1RS*czE3NjM2MzQ5MjEkbzEkZzAkdDE3NjM2MzQ5MjEkajYwJGwwJGgw#" target="_blank" rel="noopener"&gt;Bayesian Optimization&lt;/A&gt;&amp;nbsp;is quite limited at the moment, it may be hard to answer your questions.&lt;BR /&gt;It seems the fitting/validation strategies differ between the classical GP platform and the GP used in Bayesian Optimization (BO):&amp;nbsp;&lt;LI-MESSAGE title="Bayesian Optimization GP vs standalone GP" uid="914225" url="https://community.jmp.com/t5/Discussions/Bayesian-Optimization-GP-vs-standalone-GP/m-p/914225#U914225" discussion_style_icon_css="lia-mention-container-editor-message lia-img-icon-forum-thread lia-fa-icon lia-fa-forum lia-fa-thread lia-fa"&gt;&lt;/LI-MESSAGE&gt;. The Leave-One-Out strategy is used in both platforms, but the aggregation of results may differ: Jackknife for the classical GP vs. classical LOO for the BO GP.&lt;/P&gt;
&lt;P&gt;I also find it frustrating not to be able to save a prediction formula from the BO model. I did find a workaround to approximate the model used in the BO platform: since you have access to the parameter values from the BO platform (in the Gaussian Process model report), I relaunched the classical Gaussian Process platform while enforcing the theta values found in the BO model. This is not perfect, as the intercept won't be the same between the two models, but at least I can approximate the prediction model found in the BO platform:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_0-1763638495608.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/87407iF68C9F84C8C0FA2B/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_0-1763638495608.png" alt="Victor_G_0-1763638495608.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I hope this first answer helps,&lt;/P&gt;</description>
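The workaround above (refitting a GP while holding theta fixed at the value reported by another fit) can be sketched outside JMP. Below is a hypothetical Python/scikit-learn analogue, not JMP code: `length_scale_bounds="fixed"` pins the RBF length-scale so the refit does not re-optimize it. The theta value of 1.5 and the data are made-up placeholders.

```python
# Hypothetical analogue of the workaround: refit a GP with theta
# (the kernel length-scale) frozen at a value taken from another report.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(20, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=20)

theta = 1.5  # placeholder: length-scale read off the other platform's report
kernel = RBF(length_scale=theta, length_scale_bounds="fixed")

gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
gp.fit(X, y)

# Because the bounds are "fixed", fitting leaves the length-scale untouched.
print(gp.kernel_.length_scale)  # 1.5
```

As in the JMP workaround, the fitted mean/intercept still comes from this refit's own data, so the result only approximates the original model; only the kernel hyperparameter is carried over.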
      <pubDate>Thu, 20 Nov 2025 11:35:07 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914474#M107451</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2025-11-20T11:35:07Z</dc:date>
    </item>
    <item>
      <title>Re: Bayesian optimization-exporting the underlying model</title>
      <link>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914528#M107455</link>
<description>&lt;P&gt;Thanks for your thoughts. Although it would still be good to see the underlying model, I suppose that if I added additional data (as a form of validation), reran the model (or generated a new one), and neither the fit nor the optimal solution changed much, I would conclude that the model is good. If instead I added some new data and the new solution was totally different, I would conclude that the old model had issues and accept the new one (which would be generated regardless). But I do wish I could hold back more than one sample, though maybe if I do enough iterations, I'll have a better sense of the actual validation R2(?).&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 14:49:15 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Bayesian-optimization-exporting-the-underlying-model/m-p/914528#M107455</guid>
      <dc:creator>abmayfield</dc:creator>
      <dc:date>2025-11-20T14:49:15Z</dc:date>
    </item>
  </channel>
</rss>

