Bayesian Optimization, aka BayesOpt, is a new platform coming to JMP Pro 19. With it, you can use your existing data to fit a model and then use that model to intelligently decide where to sample next. These new observations are used to update the model and find even better samples. This process can start with a minimal number of runs, which means wasting fewer resources on suboptimal samples and spending more time exploring new opportunities.
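If you're curious what that loop looks like under the hood, here is a minimal conceptual sketch in Python. It is not JMP's implementation; the Gaussian process surrogate, the candidate grid, and the upper-confidence-bound selection rule are all illustrative assumptions:

```python
# Minimal conceptual sketch of a Bayesian optimization loop.
# NOT the JMP implementation; surrogate, candidates, and acquisition are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def run_experiment(x):
    # Stand-in for a real (expensive) experiment.
    return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

# Start from a small set of existing runs.
X = rng.uniform(0, 1, size=(5, 2))
y = np.array([run_experiment(x) for x in X])

candidates = rng.uniform(0, 1, size=(500, 2))  # candidate set to choose from

for iteration in range(5):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # fit the surrogate
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 2.0 * std               # favor good predictions and high uncertainty
    x_next = candidates[np.argmax(ucb)]  # "where to sample next"
    y_next = run_experiment(x_next)      # run the new sample
    X = np.vstack([X, x_next])           # update the data and refit next round
    y = np.append(y, y_next)
```

The key idea is that every new observation feeds back into the surrogate model, so each round of suggestions is better informed than the last.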

Prefer video? Watch it here.
Solve problems with ease
BayesOpt has a built-in auto mode for your “standard” problems. Figure out where to look next without getting lost in the details.

Widely applicable
Alternatively, a suite of interactive customization options is available for more complex problems or for when you want to dig into the details yourself.

Trust and transparency
BayesOpt results are understandable and interpretable. The platform automatically records how and why each sample is selected, keeping track of your exploration. You can also edit these reasons manually within the platform to customize your documentation.

Additionally, the platform is built on the familiar Prediction Profiler and its built-in desirability functions, so you can hit the ground running. And you don't have to accept the platform's suggestions at face value! Beyond the automatic suggestions, BayesOpt provides credible and interpretable metrics so you can make well-informed decisions about its recommendations.

Track your progress
Get immediate feedback on how well your project is progressing across sample iterations with the built-in desirability run chart.

Expand your options without starting over
No industry stays stagnant for long. BayesOpt lets you keep your options open, even if those options change after you start your product development process. For continuous variables, use the profiler axes to expand the range you want to explore directly in the platform before generating a new candidate set to choose from. For more complex changes to factors and constraints, load any custom candidate set into the platform.

In addition to choosing sample points from a candidate set, you can use the profiler to explore predictions and select sample points from there.

Sneak peek
To demonstrate BayesOpt's intuitive workflow, let's take a look at a small example. Below is a re-simulated version of the Tiretread sample from our sample index, included with your copy of JMP. I'm going to use the BayesOpt platform to maximize the Abrasion response while matching targets on Elong and Hardness. Because BayesOpt makes extensive use of the profiler desirabilities, I set them before running the platform.
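Since the example leans on desirabilities, here is a quick sketch of how Derringer-Suich-style desirability functions, like the ones in the profiler, work: a maximize desirability for Abrasion, target-match desirabilities for Elong and Hardness, and a geometric mean to combine them. The limits and targets below are made up for illustration and are not the values from the Tiretread data:

```python
import numpy as np

def d_maximize(y, low, high):
    # Desirability for "larger is better": 0 below low, 1 above high.
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def d_target(y, low, target, high):
    # Two-sided desirability for "match target": 1 at target, 0 outside [low, high].
    up = np.clip((y - low) / (target - low), 0.0, 1.0)
    down = np.clip((high - y) / (high - target), 0.0, 1.0)
    return np.where(y <= target, up, down)

# Hypothetical response values and limits, for illustration only.
abrasion, elong, hardness = 145.0, 475.0, 68.0
d = np.array([
    d_maximize(abrasion, low=110.0, high=170.0),
    d_target(elong, low=350.0, target=500.0, high=650.0),
    d_target(hardness, low=60.0, target=67.5, high=75.0),
])
overall = d.prod() ** (1.0 / len(d))  # geometric mean of the individual desirabilities
```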

Launch the platform from the Analyze menu to get started.

BayesOpt has a typical launch dialog for a modeling platform in JMP. You only need to specify the Y's and X's at this point because we are at the beginning of the BayesOpt process. Note that you can also select the batch size here, which is the number of rows added by the auto mode. The default is one, but for this first demo, I am going to set the batch size to two.

The BayesOpt platform is organized into tabs. Start with the Model Summary tab, which has important information about all the responses. The first thing we should look at is the summary of all the RSqs. The auto mode algorithm uses the RSqs to determine whether the models are good enough to work with. You can see below that the RSq for the Abrasion model is below our standards and is highlighted in red. Below that, we see the observed desirability run chart, which keeps a running score of our progress. In this case, we see that the initial five runs were not very good. Over successive iterations, we want to see these values trending up.
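The exact rule auto mode applies isn't spelled out here, but conceptually the gate amounts to checking each response model's RSq against a cutoff. In this sketch, both the 0.8 threshold and the RSq values are assumptions, not numbers from the platform:

```python
# Illustrative only: these RSq values are made up, and the 0.8 cutoff is an
# assumed threshold, not the rule the platform actually uses.
rsq = {"ABRASION": 0.62, "ELONG": 0.88, "HARDNESS": 0.93}

RSQ_CUTOFF = 0.80
weak_models = [name for name, r2 in rsq.items() if r2 < RSQ_CUTOFF]

if weak_models:
    phase = "Space Filling Exploration"  # keep learning the response surface
else:
    phase = "Model Refinement"           # trust the models and start optimizing
print(phase, weak_models)                # -> Space Filling Exploration ['ABRASION']
```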

The rest of this tab is a fairly standard JMP platform with actual by predicted plots and a profiler. The button at the bottom allows you to see the traces of the prediction standard errors. Now let's move on to the Batch Selection tab. The platform's auto selection has found the two runs requested in the launch dialog. By default, it assumes you want to stay in auto mode and keeps the report very compact.
Because we have a response with a low RSq, the platform has chosen to stay in the Space Filling Exploration phase. It’s using a space filling criterion (MaxPro Space Filling), which helps Gaussian process models fit better.
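MaxPro (maximum projection) designs spread points so that they stay space filling in every projection onto subsets of factors, which is exactly what a Gaussian process needs to estimate its correlation parameters well. As a rough sketch of the idea (not JMP's implementation), greedily augmenting the existing runs from a candidate set might look like this:

```python
import numpy as np

def maxpro_criterion(design):
    # Sum over all point pairs of 1 / prod_k (x_ik - x_jk)^2.
    # Smaller is better: points that coincide in ANY single factor are heavily
    # penalized, so the design stays space filling in every projection.
    n = design.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            diff_sq = (design[i] - design[j]) ** 2
            total += 1.0 / np.prod(diff_sq + 1e-12)  # small jitter avoids divide-by-zero
    return total

def augment(existing, candidates, batch_size):
    # Greedily add the candidate that keeps the MaxPro criterion smallest.
    design = existing.copy()
    chosen = []
    for _ in range(batch_size):
        scores = [maxpro_criterion(np.vstack([design, c])) for c in candidates]
        best = int(np.argmin(scores))
        chosen.append(candidates[best])
        design = np.vstack([design, candidates[best]])
    return np.array(chosen)

rng = np.random.default_rng(7)
existing = rng.uniform(size=(5, 3))      # the five initial runs (factors scaled to [0, 1])
candidates = rng.uniform(size=(200, 3))  # candidate set to pick from
new_runs = augment(existing, candidates, batch_size=2)
```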

From here, click Make Table, and the runs are added to the table automatically; we are done with this round. Returning to the table, we see that the two runs have been added, along with an Iteration column and a Reason Added column that keep track of why those runs are there.

I use my simulator to create new responses and launch the platform again.

Once the platform has been relaunched, we can see that the two space filling runs have led to a better fit for the responses, but the new responses aren't meaningfully better than the original five.

Moving to the Batch Selection tab, we see that the two runs aren’t space filling runs, as we have moved from the Space Filling phase to the Model Refinement phase. We can click Make Table and repeat the process.
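The post doesn't state which criterion the Model Refinement phase optimizes, but a common refinement-style acquisition in Bayesian optimization is expected improvement (see the Frazier tutorial under Further reading). Here is a hedged sketch, assuming we already have a posterior mean and standard deviation for each candidate's overall desirability:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, best_so_far, xi=0.01):
    # EI for maximization: the expected amount by which a candidate's outcome
    # exceeds the best observed value, under the Gaussian process posterior.
    std = np.maximum(std, 1e-12)
    z = (mean - best_so_far - xi) / std
    return (mean - best_so_far - xi) * norm.cdf(z) + std * norm.pdf(z)

# Suppose the fitted surrogate gave these posterior summaries at four candidates
# (made-up numbers, for illustration only).
mean = np.array([0.55, 0.62, 0.48, 0.60])  # predicted overall desirability
std = np.array([0.05, 0.10, 0.02, 0.20])   # posterior uncertainty
best_so_far = 0.58

ei = expected_improvement(mean, std, best_so_far)
next_run = int(np.argmax(ei))  # candidate with the largest expected improvement
```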

Skipping ahead two batches, we see again that the models are all still good, and our run chart shows that while the second batch didn’t lead to significantly improved responses, the third one did!

Looking at the Batch Selection tab, we see it is replicating the best training run we've seen so far, meaning we have moved on to the third stage of the Bayesian Optimization algorithm, the Confirm/Challenge phase. In this stage, we are first seeking to confirm that we have the best run, while also finding nearby runs that challenge that optimum. We could stop here, but we'll take another step to see what happens.
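As a loose illustration of that idea (the platform's actual rules for picking challengers aren't documented here), Confirm/Challenge can be thought of as replicating the best run and then proposing candidates in its neighborhood; the neighborhood radius and the ranking below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Runs so far: factor settings (scaled to [0, 1]) and observed overall desirability.
X = rng.uniform(size=(12, 3))
overall = rng.uniform(size=12)

# Confirm: replicate the best run observed so far.
best = X[np.argmax(overall)]

# Challenge: look for candidates near, but not at, the best settings.
candidates = rng.uniform(size=(300, 3))
dist = np.linalg.norm(candidates - best, axis=1)
neighborhood = candidates[(dist > 0.02) & (dist < 0.15)]  # assumed neighborhood size
# In practice the fitted models would rank these; here we simply take the first one.
challenger = neighborhood[0] if len(neighborhood) else best
```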

After taking one more iteration, we see that we are getting essentially the same result, and the auto mode algorithm is staying in Confirm/Challenge mode. We can never rule out the possibility that there is more that can be improved, but this result suggests that we may have found our best settings already.


Thank you for exploring the new Bayesian Optimization platform. We look forward to sharing more features and details as we get closer to release, and we are already working on many exciting improvements for future releases.
Further reading
Frazier, P. I. (2018). A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811.