Kasia_Dob, Staff
Introducing the Bayesian Optimization platform in JMP Pro

Bayesian Optimization, aka BayesOpt, is a new platform coming to JMP Pro 19. With it, you can use your existing data to fit a model and then use that model to intelligently decide where to sample next. These new observations are used to update the model and find even better samples. This process can start with a minimal number of runs, which means wasting fewer resources on suboptimal samples and spending more time exploring new opportunities. 
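
If you're curious what that fit-then-propose loop looks like in the abstract, here is a minimal sketch in Python using a toy Gaussian process surrogate and the expected improvement criterion over a random candidate set. It is purely conceptual and is not how the JMP platform is implemented; the objective function, settings, and acquisition choice are all made up for illustration.

```python
# Conceptual sketch of a Bayesian optimization loop (NOT JMP's implementation):
# fit a simple Gaussian process surrogate, score candidates with expected
# improvement, "run" the best candidate, and update the data.
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, X_new, length=0.3, noise=1e-4):
    """Posterior mean and std of a zero-mean GP with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X_new, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """Expected improvement over `best` at each candidate (maximization)."""
    z = (mu - best) / sd
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

def run_experiment(x):
    """Stand-in for the real experiment or simulator (invented objective)."""
    return -np.sum((x - 0.7) ** 2, axis=-1)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (5, 2))                  # a handful of initial runs
y = run_experiment(X)

for it in range(10):
    cand = rng.uniform(0, 1, (500, 2))         # candidate points to score
    ys = (y - y.mean()) / y.std()              # standardize the response
    mu, sd = gp_posterior(X, ys, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sd, ys.max()))]
    X = np.vstack([X, x_next])                 # run it and update the data
    y = np.append(y, run_experiment(x_next))
    print(f"batch {it + 1}: best response so far = {y.max():.4f}")
```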


Prefer video? Watch it here.

 

Solve problems with ease

BayesOpt has a built-in auto mode for your “standard” problems. Figure out where to look next without getting lost in the details.


Widely applicable

Alternatively, a suite of interactive customization options is available for more complex problems or for when you want to dig into the details yourself.


Trust and transparency

BayesOpt results are understandable and interpretable. The platform automatically detects and records how and why each sample is selected, keeping a record of your exploration. You can also edit these reasons manually within the platform to customize your documentation.


Additionally, the platform is built on the familiar Prediction Profiler and its built-in desirability functions, allowing you to hit the ground running. And you don't have to accept the platform's suggestions at face value! Beyond the automatic suggestions, BayesOpt provides credible, interpretable metrics so you can make well-informed decisions about its recommendations.


Track your progress

Get immediate feedback on how well your project is progressing across sample iterations with the built-in desirability run chart.
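
Outside of JMP, the same idea can be mocked up in a few lines: plot each run's observed overall desirability and the running best against run order. The desirability values below are invented; in the platform, the chart is built for you.

```python
# Minimal stand-in for a desirability run chart (illustration only).
import numpy as np
import matplotlib.pyplot as plt

# Invented overall-desirability values for ten runs, in run order.
desirability = np.array([0.12, 0.08, 0.15, 0.10, 0.14,   # initial runs
                         0.22, 0.19, 0.41, 0.44, 0.71])  # added batches
best_so_far = np.maximum.accumulate(desirability)
runs = np.arange(1, len(desirability) + 1)

plt.step(runs, best_so_far, where="post", label="best so far")
plt.plot(runs, desirability, "o", label="observed desirability")
plt.xlabel("Run order")
plt.ylabel("Overall desirability")
plt.legend()
plt.show()
```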


Expand your options without starting over

No industry stays stagnant for long. BayesOpt lets you keep your options open, even if those options change after you start your product development process. For continuous variables, use the profiler axes to expand the range you want to explore directly in the platform before generating a new candidate set to choose from. For more complex changes to factors and constraints, load any custom candidate set into the platform.
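
As a rough illustration of what a custom candidate set might look like, here is a sketch that samples candidates over a widened factor range and filters them with a simple constraint before saving them to a table. Only the Tiretread factor names come from the sample data; the ranges and the constraint are invented for illustration.

```python
# Sketch of building a custom candidate set with an expanded range and a
# hypothetical constraint; the resulting table could be loaded into the platform.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 2000
cand = pd.DataFrame({
    "Silica": rng.uniform(0.5, 2.5, n),    # widened range for illustration
    "Silane": rng.uniform(20.0, 60.0, n),
    "Sulfur": rng.uniform(1.0, 3.5, n),
})

# Keep only candidates that satisfy a hypothetical process constraint.
feasible = cand[cand["Silica"] + 0.02 * cand["Silane"] <= 3.2].reset_index(drop=True)
feasible.to_csv("candidate_set.csv", index=False)
print(f"{len(feasible)} feasible candidates out of {n}")
```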


In addition to choosing sample points from a candidate set, you can use the profiler to explore predictions and select sample points from there.


Sneak peek

To demonstrate BayesOpt's intuitive workflow, let's take a look at a small example: a re-simulated version of the Tiretread sample data from the sample index included with your copy of JMP. I'm going to use the BayesOpt platform to maximize the Abrasion response while matching targets on Elong and Hardness. Because BayesOpt makes heavy use of profiler desirabilities, I set them before running the platform.
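
For readers who want the idea behind those desirability settings in code form, here is a sketch of classical Derringer-Suich-style desirabilities: a "maximize" function for Abrasion and "match target" functions for Elong and Hardness, combined with a geometric mean. The limits and targets are invented, and JMP's own desirability functions, which you set interactively in the profiler, may use a different functional form.

```python
# Derringer-Suich-style desirabilities (illustrative limits and targets).
import numpy as np

def d_maximize(y, low, high):
    """0 below `low`, 1 above `high`, linear in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def d_target(y, low, target, high):
    """1 at `target`, falling to 0 at `low` and `high`."""
    up = np.clip((y - low) / (target - low), 0.0, 1.0)
    down = np.clip((high - y) / (high - target), 0.0, 1.0)
    return np.minimum(up, down)

def overall_desirability(abrasion, elong, hardness):
    d = np.array([
        d_maximize(abrasion, low=110, high=170),
        d_target(elong, low=350, target=450, high=550),
        d_target(hardness, low=60, target=67.5, high=75),
    ])
    return d.prod(axis=0) ** (1.0 / len(d))   # geometric mean

print(overall_desirability(abrasion=150, elong=460, hardness=68))
```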


 Launch the platform from the analysis menu to get started.


BayesOpt has a typical launch dialog for a modeling platform in JMP. You only need to specify the Y's and X's at this point, because we are at the beginning of the BayesOpt process. Note that you can select the batch size here, which is the number of rows added by the auto mode. The default is one, but for this first demo, I am going to set the batch size to two.


The BayesOpt platform is organized into tabs. Start with the Model Summary tab, which has important information about all the responses. The first thing to look at is the summary of all the RSqs. The auto mode algorithm looks at the RSqs to determine whether the models are good enough to work with. Here, the RSq for the Abrasion model is below our standards and is highlighted in red. Below that, we see the observed desirability run chart, which keeps a running score of our progress. In this case, we see that the initial five runs were not very good. Over successive iterations, we want to see these values trending up.
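
The adequacy check itself is simple in spirit: compute an RSq for each response's model and flag any that fall below a working threshold. The sketch below uses invented observed and predicted values and an arbitrary cutoff; the platform applies its own rules.

```python
# Sketch of the model-adequacy idea: flag responses whose RSq is too low.
import numpy as np

def r_square(observed, predicted):
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

THRESHOLD = 0.5  # arbitrary illustrative cutoff

# Invented observed/predicted pairs standing in for each response's fit.
fits = {
    "Abrasion": (np.array([120., 135., 150., 110., 145.]),
                 np.array([130., 128., 138., 128., 133.])),
    "Elong":    (np.array([400., 420., 460., 380., 440.]),
                 np.array([402., 417., 455., 385., 441.])),
}
for response, (obs, pred) in fits.items():
    rsq = r_square(obs, pred)
    flag = "" if rsq >= THRESHOLD else "  <- flagged, stay in space filling"
    print(f"{response}: RSq = {rsq:.2f}{flag}")
```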


The rest of this tab looks like a fairly standard JMP modeling report, with actual-by-predicted plots and a profiler. The button at the bottom lets you see the traces of the prediction standard errors. Now let's move on to the Batch Selection tab. The platform's auto selection has found the two runs requested in the launch dialog. By default, it assumes you want to stay in auto mode and keeps the report very compact.

Because we have a response with a low RSq, the platform has chosen to stay in the Space Filling Exploration phase. It’s using a space filling criterion (MaxPro Space Filling), which helps Gaussian process models fit better.
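
To give a feel for what a space filling criterion rewards, here is a greedy sketch based on the MaxPro criterion (Joseph, Gul & Ba, 2015): a candidate is penalized heavily if it sits close to an existing run in any projection. This illustrates the idea only, not the platform's algorithm; all data are randomly generated.

```python
# Greedy space-filling selection using a MaxPro-style cost (illustration only).
import numpy as np

def maxpro_cost(candidate, design, eps=1e-12):
    """Sum over design points of 1 / prod_l (x_l - c_l)^2 (smaller is better)."""
    diff2 = (design - candidate) ** 2 + eps
    return np.sum(1.0 / np.prod(diff2, axis=1))

rng = np.random.default_rng(3)
design = rng.uniform(0, 1, (5, 3))           # existing runs (factors scaled to [0, 1])
candidates = rng.uniform(0, 1, (1000, 3))    # candidate set

batch = []
for _ in range(2):                           # pick a batch of two runs
    costs = np.array([maxpro_cost(c, design) for c in candidates])
    best = int(np.argmin(costs))
    batch.append(candidates[best])
    design = np.vstack([design, candidates[best]])   # update before the next pick

print(np.array(batch))
```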


From here, click Make Table; the runs are added to the table automatically, and we are done with this round. Returning to the table, we see that the two runs have been added, along with an Iteration column and a Reason Added column that keep track of why those runs are there.


I use my simulator to create new responses and launch the platform again.


Once the platform has been relaunched, we can see that the two space filling runs have led to a better fit for the responses, but the new responses aren't meaningfully better than the original five.


Moving to the Batch Selection tab, we see that the two runs aren’t space filling runs, as we have moved from the Space Filling phase to the Model Refinement phase. We can click Make Table and repeat the process.


Skipping ahead two batches, we see again that the models are all still good, and our run chart shows that while the second batch didn’t lead to significantly improved responses, the third one did!


Looking at the Batch Selection tab, we see it is replicating the best training run we've seen so far, meaning we have moved on to the third stage of the Bayesian Optimization algorithm, the Confirm/Challenge phase. In this stage, we first seek to confirm that we have the best run, while also finding nearby runs that challenge that optimum. We could stop here, but we'll take another step to see what happens.
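
Conceptually, the Confirm/Challenge step can be sketched as: replicate the best run observed so far, then search a small neighborhood around it for a high-scoring challenger. In the sketch below, predicted_desirability is a hypothetical stand-in for the fitted models combined with the desirability functions; everything else is invented for illustration.

```python
# Conceptual Confirm/Challenge sketch: replicate the current best run and
# add a nearby "challenger" with high predicted desirability.
import numpy as np

def predicted_desirability(x):               # hypothetical stand-in function
    return np.exp(-np.sum((x - 0.65) ** 2, axis=-1))

rng = np.random.default_rng(11)
X = rng.uniform(0, 1, (12, 3))               # runs so far (factors scaled to [0, 1])
observed_d = predicted_desirability(X)       # pretend these were measured

best_idx = int(np.argmax(observed_d))
confirm_run = X[best_idx]                    # replicate the current best

# Challenger: best predicted point within a small neighborhood of the best run.
neighborhood = np.clip(confirm_run + rng.uniform(-0.05, 0.05, (500, 3)), 0, 1)
challenge_run = neighborhood[np.argmax(predicted_desirability(neighborhood))]

print("confirm:  ", np.round(confirm_run, 3))
print("challenge:", np.round(challenge_run, 3))
```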


After taking one more iteration, we see that we are getting essentially the same result, and the auto mode algorithm is staying in Confirm/Challenge mode. We can never rule out the possibility that there is more that can be improved, but this result suggests that we may have found our best settings already.
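
If you want a rule of thumb for deciding when to stop iterating, one simple heuristic is to stop once the best observed desirability has improved by less than a small tolerance over the last few batches. This is not a rule prescribed by the platform, just a common-sense sketch with invented numbers.

```python
# Illustrative stopping heuristic: stop when recent improvement is negligible.
import numpy as np

best_per_batch = np.array([0.15, 0.22, 0.44, 0.71, 0.72, 0.72])  # invented values
tol, patience = 0.02, 2

recent_gain = best_per_batch[-1] - best_per_batch[-1 - patience]
if recent_gain < tol:
    print(f"Gain of {recent_gain:.3f} over the last {patience} batches; "
          "consider stopping or confirming the current best settings.")
else:
    print("Still improving; keep iterating.")
```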


Thank you for exploring the new Bayesian Optimization platform. We look forward to sharing more features and details as we get closer to release, and we are working on many exciting improvements for future releases.

 

Further reading

Frazier, P. I. (2018). A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811.
