robert_anderson


Finding the best process operating conditions using 2 optimisation approaches

Scientists and engineers often need to find the best settings or operating conditions for their processes or products to maximise yield or performance. I will show you how the optimisation capabilities in JMP can help you work out the best settings to use. Somewhat surprisingly, the particular settings that are predicted to give the highest yield or best performance will not always be the best place to operate that process in the long run. Most processes and products are subject to some degree of drift or variation, and the best operating conditions need to take account of that.

You may be familiar with maximise desirability in the context of process optimisation, but simulation experiment is a little-known gem within the JMP Prediction Profiler. If you are trying to find the most robust factor settings for a process, then you need to know about simulation experiment. I will show you how useful simulation experiment can be and how it goes beyond what maximise desirability can achieve.

The goal of most designed experiments is to identify which factors or inputs affect the responses or outputs of a process, and to quantify how strongly. A secondary goal is often to use this understanding to choose factor settings that will give the most desirable response or output values.

Once we have run a designed experiment and built a model that describes the relationship between the factors and responses, we can then use that model to find the optimum factor settings that will give the most satisfactory values for the responses we are interested in. There are several different ways of performing this optimisation process in JMP and these methods are described in detail in chapters 8 and 9 of the Profilers book, which can be found under Help > Books within JMP.

I want to focus on two of these methods, maximise desirability and simulation experiment, as they can in some situations lead to very different solutions. The example I am going to use to illustrate this is a 13-run definitive screening design (DSD) with five factors and one response. Definitive screening designs are a new class of three-level screening designs that allow you to identify the important main effects very efficiently. They also allow you to build a full response surface model with two-factor interactions and quadratic terms if no more than three main effects are active. Bradley Jones, co-inventor of these designs, describes them in more detail in his excellent blog post on the subject.
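
To make the structure concrete, here is a minimal sketch in Python that illustrates the layout only, not JMP's actual construction. A DSD pairs every generator run with its mirror image and adds a single centre run; for five factors, the generator is built from a six-column matrix with one column dropped, which is where the 13 runs come from. The generator values below are invented for illustration (a real DSD uses a conference matrix so that the main effects are also mutually orthogonal):

```python
import numpy as np

# Invented +/-1 generator with a zero diagonal, for illustration only.
# A real DSD (the one JMP builds) uses a conference matrix here.
gen = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0, -1,  1, -1,  1],
    [ 1, -1,  0, -1,  1,  1],
    [ 1,  1, -1,  0,  1, -1],
    [ 1, -1,  1,  1,  0, -1],
    [ 1,  1,  1, -1, -1,  0],
])

# Every generator run is paired with its mirror image, and a single
# centre run is appended: 6 + 6 + 1 = 13 runs.
dsd = np.vstack([gen, -gen, np.zeros((1, 6))])

# For five factors, one of the six columns is dropped.
dsd = dsd[:, :5]
print(dsd.shape)  # (13, 5)

# The mirror-pair structure guarantees that main effects are
# orthogonal to every quadratic effect, so they are never confounded.
print(np.allclose(dsd.T @ dsd**2, 0))  # True
```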

A 13-run DSD is shown below.

[Figure: the 13-run definitive screening design table]

The quickest and easiest way to build a model for this experiment is to run the built-in Screening script highlighted in blue in the top left-hand panel of the JMP data table. The model we obtain contains three main effects – Modifier, Temperature and Time – and a two-factor interaction term between Modifier and Temperature, plus a quadratic term for Time. The Prediction Profiler for this model is shown below. I have also turned on the Monte Carlo simulator and the Contour Profiler.

[Figure: Prediction Profiler at the initial factor settings, with the Monte Carlo simulator and Contour Profiler turned on]
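
To keep the later steps concrete, I will use a small Python stand-in for the prediction formula, which the sketches that follow reuse. The coefficients below are invented purely for illustration (the real ones come from JMP's fit of the DSD data), but the terms match the model just described:

```python
# Illustrative stand-in for the fitted prediction formula, with the
# factors in coded units (-1 to +1). The terms match the model above;
# the coefficients are invented and are NOT the values JMP estimated.
def critical_output(modifier, temperature, time):
    return (75.0
            + 2.0 * modifier
            + 3.0 * temperature
            + 1.5 * time
            - 3.0 * modifier * temperature  # Modifier x Temperature
            - 2.0 * time ** 2)              # quadratic term for Time
```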

Using the initial factor settings (the mid-point of the three-dimensional factor space), we see the critical output is predicted to have a value of 75.3, and 50 percent of the points from the Monte Carlo simulation fall below the lower spec limit. The contour plot shows us sitting almost exactly on that lower spec limit.
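
Conceptually, the Monte Carlo simulator draws random transmitted variation around the nominal factor settings, pushes each draw through the prediction formula, and counts the fraction of simulated outputs that fall outside spec. A minimal sketch, reusing the critical_output stand-in above and assuming, purely for illustration, normally distributed noise with a standard deviation of 0.3 coded units on each factor and a lower spec limit of 75:

```python
import numpy as np

rng = np.random.default_rng(1)

def defect_rate(nominal, lsl, sd=0.3, n=100_000):
    """Fraction of simulated outputs below the lower spec limit.

    nominal -- (modifier, temperature, time) in coded units
    sd      -- assumed transmitted variation on each factor
    """
    noisy = [x + rng.normal(0.0, sd, n) for x in nominal]
    return float(np.mean(critical_output(*noisy) < lsl))

# At the mid-point of the factor space the prediction sits right on
# the illustrative lower spec limit, so roughly half the simulated
# points fall below it, just as in the Profiler screenshot above.
print(defect_rate((0, 0, 0), lsl=75.0))  # ~0.5
```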

When we move to the settings determined by maximise desirability (below), the critical output increases to 82.1 and the percentage of points below the lower spec limit drops to 4.6 percent. The contour plot shows that we are now sitting in the top left-hand corner where the highest value of the Critical Output is predicted to be.

[Figure: Prediction Profiler at the maximise desirability settings]
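
For a single larger-is-better response, maximise desirability is conceptually a search of the factor space for the settings with the highest predicted output, ignoring any transmitted variation. Here is a crude grid-search sketch over the hypothetical model; JMP's actual routine is a more sophisticated numerical optimisation of the desirability function:

```python
import numpy as np
from itertools import product

# Evaluate the hypothetical prediction formula over a coarse grid of
# coded factor settings and keep the settings with the highest value.
levels = np.linspace(-1, 1, 21)
grid = np.array(list(product(levels, repeat=3)))
predicted = critical_output(grid[:, 0], grid[:, 1], grid[:, 2])
best = grid[np.argmax(predicted)]
print("settings:", best, "predicted output:", round(predicted.max(), 2))
```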

If we now look at the settings determined by simulation experiment (below), we have moved to the top right-hand corner of the contour plot, where the contour lines are farther apart. The predicted value for the critical output is not quite as high, at 79.8, but the percentage of points below the lower spec limit is substantially reduced, to 1.8 percent. Comparing the maximise desirability solution with the simulation experiment solution, the main difference is that simulation experiment has chosen a high setting for Modifier. This exploits the two-factor interaction between Modifier and Temperature and makes the Critical Output insensitive to changes in Temperature (the Temperature line in the Profiler is now flat). The Critical Output distribution becomes much tighter, with considerably fewer points out of spec, giving a more robust process.

[Figure: Prediction Profiler at the simulation experiment settings, with a flat Temperature trace]
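
The mechanism is easy to see in the hypothetical model: the sensitivity of the output to Temperature is the Temperature coefficient plus the interaction coefficient times Modifier, so a suitable Modifier setting can cancel it almost entirely. Reusing the earlier sketches:

```python
# d(output)/d(Temperature) = 3.0 - 3.0 * modifier in the hypothetical
# model, so at modifier = +1 the Temperature trace goes flat and
# Temperature variation no longer transmits into the output.
for m in (-1, 0, 1):
    spread = critical_output(m + rng.normal(0, 0.3, 100_000),
                             rng.normal(0, 0.3, 100_000),
                             rng.normal(0, 0.3, 100_000)).std()
    print(m, round(spread, 2))  # spread shrinks as Modifier increases
```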

Let’s take a look at how simulation experiment found this more robust solution. Simulation experiment explores the factor space in a different way to maximise desirability. Rather than searching for the settings that give the most desirable value for the critical output, it searches for the settings that minimise the defect rate calculated by the Monte Carlo simulation. It still uses the same model as maximise desirability, but it uses that model to run a series of Monte Carlo simulations to determine how the defect rate varies across the factor space. It does this with a space-filling design and models the resulting defect rates using a Gaussian process. To launch simulation experiment, go into the red-triangle menu in the Simulator outline within the Prediction Profiler.

[Figure: the simulation experiment launch dialog]

When you run simulation experiment, it performs a Monte Carlo simulation at each of the factor settings specified by the space-filling design and records the defect rate from each of those simulation runs in a table, shown below. Each row represents a Monte Carlo simulation run at different factor settings. The table also contains a built-in script that models the defect rate (it actually models the log of the defect rate, which is a better-behaved response because defect rates can span several orders of magnitude). We can then find the settings that minimise the defect rate (using maximise desirability in the defect rate Profiler) and save those settings back to the original Prediction Profiler for the critical output.

[Figure: Gaussian process model of the log defect rate from the simulation experiment table]
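
Putting the pieces together, here is a conceptual sketch of the whole simulation experiment loop, again building on the earlier snippets (it reuses critical_output, defect_rate and rng). A crude random sample stands in for JMP's space-filling design, and scikit-learn's Gaussian process stands in for JMP's; everything here is illustrative rather than a re-implementation of what JMP does internally:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# 1. Space-filling sample of candidate nominal settings (a random
#    stand-in for the space-filling design JMP generates).
candidates = rng.uniform(-1, 1, size=(40, 3))

# 2. Monte Carlo defect rate at each candidate, modelled on the log
#    scale because defect rates span orders of magnitude.
rates = np.array([defect_rate(tuple(c), lsl=75.0) for c in candidates])
log_rates = np.log10(rates + 1e-6)  # guard against zero defect rates

# 3. Smooth Gaussian process surrogate for the log defect rate.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True)
gp.fit(candidates, log_rates)

# 4. Minimise the surrogate over a dense random grid (JMP instead
#    runs maximise desirability on its Gaussian process model).
trial = rng.uniform(-1, 1, size=(20_000, 3))
robust = trial[np.argmin(gp.predict(trial))]
print("most robust settings (coded units):", robust.round(2))
```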

 

To see the simulation experiment demonstrated in more detail, watch this video:

[Video: http://www.youtube.com/embed/9KZ7Ns3CQzU]

The key difference between maximise desirability and simulation experiment is that maximise desirability doesn’t take account of the natural variation in the factors when choosing the optimum settings. Simulation experiment does take account of that variation, and so finds the most robust settings, the ones that minimise the defect rate. The difference is nicely illustrated by a drawing my daughter made for me, showing how JMP can make complex problems simple.

[Drawing: my daughter's illustration of how JMP makes complex problems simple]

4 Comments
Community Member

Christel Kronig wrote:

Very useful and interesting post, thank you. I recreated your data table to look at this in JMP. I had not used the simulation part before, but I managed to successfully recreate the output. I will be using this for future DOEs!

Robert Anderson wrote:

Thanks Christel. I'm glad you like it. Good luck with those future DOEs!

Community Member

Marcello wrote:

This post is very interesting and clearly illustrates the problem and the way JMP can manage it. Thank you. The drawing is very telling!

Marcello.

Robert Anderson wrote:

Thanks Marcello, I'm glad you find it interesting. I'll tell my daughter that you liked her drawing.