abmayfield
Level VI

Bayesian optimization: exporting the underlying model

Hello, 

    I have enjoyed playing around with the new Bayesian Optimization platform. However, I want to compare its results to those of some machine learning models. I see that the output DOES include a "leave-one-out" R2, but is that truly based on a single random row? Or is some "behind the scenes" splitting of the dataset into training and validation portions going on? Can I find out which row(s) were chosen as the holdback?
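
For context, my understanding of a generic leave-one-out R2 (outside of JMP, so this may not be what the platform actually does internally) is that every row is held out once in turn, not just one random row, and the pooled held-out predictions are scored against the observed values. A rough Python/scikit-learn sketch of that generic procedure, with made-up data:

# Illustrative sketch of a generic leave-one-out R2 (scikit-learn),
# NOT JMP's implementation: each row is held out once, predicted by a
# model fit to the remaining rows, and the pooled predictions are scored.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 2))          # stand-in factor settings
y = np.sin(3 * X[:, 0]) + X[:, 1] + rng.normal(0, 0.05, 20)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0]))
    gp.fit(X[train_idx], y[train_idx])
    preds[test_idx] = gp.predict(X[test_idx])

print("leave-one-out R2:", r2_score(y, preds))  # every row held out once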

Also, is it possible to see the actual model? I have used the "save all model fits" and "save script for next iteration," but I'm not sure if these are the complete models. I want to see how well the model predicts new data, though I realize this may not be exactly what this platform is meant to do.

Alternatively, could I "load" the optimal GP model into Gaussian Process? I can't see an easy way to do that, especially since I have multiple thetas, but if I can run the optimal GP generated in the GP platform, this might actually solve all of these problems/questions!

Anderson B. Mayfield
Victor_G
Super User

Re: Bayesian optimization: exporting the underlying model

Hello @abmayfield,

As the documentation on Bayesian Optimization is quite limited at the moment, it may be hard to answer your questions.
It seems the fitting/validation strategies differ between the classical GP and the GP used by Bayesian Optimization (BO): see Bayesian Optimization GP vs standalone GP. The leave-one-out strategy is used in both platforms, but the aggregation of results may be different: jackknife for the classical GP vs. classical LOO for the BO GP.

I also find it frustrating not to be able to save a prediction formula from the BO model. I did find a workaround to approximate the model used in the BO platform: since you have access to the parameter values from the BO platform (in the Gaussian Process model report), I relaunched the classical Gaussian Process platform while enforcing the theta values found in the BO model. This is not perfect, as the intercept won't be the same between the two models, but at least I'm able to approximate the prediction model found in the BO platform:

[Screenshot: Gaussian Process platform relaunched with the theta values taken from the BO model report]
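
For what it's worth, the same idea can be sketched outside JMP in Python with scikit-learn. The theta values and data below are placeholders, and note that JMP and scikit-learn parameterize the Gaussian correlation function differently, so the values are not directly interchangeable:

# Hypothetical sketch of the workaround outside JMP (scikit-learn):
# refit a GP with the kernel length scales frozen at the values
# reported by the BO platform. All values and data here are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

thetas_from_bo = [0.42, 1.7]   # placeholder per-factor values from the BO report

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(15, 2))
y = np.sin(4 * X[:, 0]) - 2 * X[:, 1]

# optimizer=None keeps the kernel hyperparameters fixed instead of
# re-estimating them, which mimics "enforce the thetas found by BO".
gp_fixed = GaussianProcessRegressor(
    kernel=RBF(length_scale=thetas_from_bo),
    optimizer=None,
    normalize_y=True,   # rough analogue of refitting the intercept/scale
)
gp_fixed.fit(X, y)
print(gp_fixed.predict(X[:3]))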

Hope this first answer helps you,

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
abmayfield
Level VI

Re: Bayesian optimization: exporting the underlying model

Thanks for your thoughts. It would still be good to have/see the underlying model, but I suppose that if I added additional data (as a form of validation), reran the model (or generated a new one), and neither the fit nor the optimal solution changed much, I could conclude that the model is good. If instead I added new data and the new solution was totally different, I would conclude that the old model had issues and accept the new one (which would be generated regardless). I do wish I could have more than one sample held back, though maybe if I run enough iterations, I'll get a better sense of the actual validation R2(?).
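
Roughly the stability check I have in mind, sketched in Python with scikit-learn rather than JMP (everything here is made-up stand-in data, not my actual workflow): refit after appending new rows and see whether the located optimum moves much.

# Sketch of a stability check: compare the fitted optimum before and
# after adding new data. If it barely moves, the old model was likely fine.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
f = lambda x: -(x - 0.6) ** 2                 # hidden "true" response
X_old = rng.uniform(0, 1, size=(12, 1))
y_old = f(X_old.ravel()) + rng.normal(0, 0.02, 12)

grid = np.linspace(0, 1, 201).reshape(-1, 1)  # candidate settings to scan

def fitted_optimum(X, y):
    gp = GaussianProcessRegressor(kernel=RBF()).fit(X, y)
    return grid[np.argmax(gp.predict(grid))][0]

opt_old = fitted_optimum(X_old, y_old)

# "new data" step: append a few fresh runs and refit
X_new = np.vstack([X_old, rng.uniform(0, 1, size=(4, 1))])
y_new = np.append(y_old, f(X_new[-4:].ravel()) + rng.normal(0, 0.02, 4))
opt_new = fitted_optimum(X_new, y_new)

print(f"optimum before: {opt_old:.3f}, after: {opt_new:.3f}")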

Anderson B. Mayfield
