Finding the best process operating conditions using two optimisation approaches

Scientists and engineers often need to find the best settings or operating conditions for their processes or products to maximise yield or performance. I will show you how the optimisation capabilities in JMP can help you work out the best settings to use. Somewhat surprisingly, the particular settings that are predicted to give the highest yield or best performance will not always be the best place to operate that process in the long run. Most processes and products are subject to some degree of drift or variation, and the best operating conditions need to take account of that.

 

You may be familiar with maximise desirability in the context of process optimisation, but simulation experiment is a little-known gem within the JMP Prediction Profiler. If you are trying to find the most robust factor settings for a process, then you need to know about simulation experiment. I will show you how useful it can be and how it goes beyond what maximise desirability can achieve.

 

The goal of most designed experiments is to identify which factors or inputs affect the responses or outputs of a process, and to quantify by how much. A secondary goal is often to use this understanding to choose factor settings that will give the most desirable response or output values.

 

Once we have run a designed experiment and built a model that describes the relationship between the factors and responses, we can use that model to find the optimum factor settings, the ones that give the most satisfactory values for the responses we are interested in. There are several ways of performing this optimisation in JMP, and these methods are described in detail in chapters 8 and 9 of the Profilers book, which can be found under Help > Books within JMP.

 

I want to focus on two of these methods, maximise desirability and simulation experiment, as they can in some situations lead to very different solutions. The example I am going to use to illustrate this is a 13-run definitive screening design (DSD) with five factors and one response. These three-level designs are a relatively new class of screening design that lets you identify the important main effects very efficiently. They can also support a full response surface model with two-factor interactions and quadratic terms if no more than three main effects are active. Bradley Jones, the inventor of these designs, describes them in more detail in his excellent blog post on the subject.
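For context, Jones and Nachtsheim construct a DSD for m factors by folding over an m-by-m matrix with a zero diagonal and appending a single centre run:

$$D = \begin{pmatrix} C \\ -C \\ \mathbf{0}^{\top} \end{pmatrix}, \qquad C \in \{-1, 0, +1\}^{m \times m}, \quad c_{ii} = 0$$

This gives 2m + 1 runs, with each non-centre run holding exactly one factor at its middle level. A five-factor DSD in JMP has 13 runs rather than 11 because, as I understand it, it is built from the six-factor design with the extra column dropped.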

 

A 13-run DSD is shown below.

 

[Image: the 13-run definitive screening design data table]

 

The quickest and easiest way to build a model for this experiment is to run the built-in Screening script, highlighted in blue in the top left-hand panel of the JMP data table. The model we obtain contains three main effects (Modifier, Temperature and Time), a two-factor interaction between Modifier and Temperature, and a quadratic term for Time. The Prediction Profiler for this model is shown below. I have also turned on the Monte Carlo simulator and the Contour Profiler.
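As a sketch of the model's form, here it is written out in JSL. The coefficients below are invented for illustration (apart from the 75.3 centre-point prediction); the real estimates come from the Screening report, and the factors are in coded units from -1 to +1:

// Invented coefficients for illustration only -- use the Screening report's estimates
b0 = 75.3; b1 = 2; b2 = 3; b3 = 1.5; b12 = -3; b33 = -2;
xm = 0; xt = 0; xs = 0; // Modifier, Temperature, Time at their centre settings
pred = b0 + b1*xm + b2*xt + b3*xs + b12*xm*xt + b33*xs^2;
Show( pred ); // 75.3 at the centre point, matching the Profiler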

 

[Image: Prediction Profiler at the initial centre-point settings, with the Monte Carlo simulator and Contour Profiler]

 

Using the initial factor settings (the mid-point of the three-dimensional factor space), the Critical Output is predicted to be 75.3, and 50 percent of the points from the Monte Carlo simulation fall below the lower spec limit. The contour plot shows us sitting almost exactly on that lower spec limit.
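Conceptually, the simulator draws random values around the chosen factor settings, pushes them through the model and counts the fraction that falls out of spec. A rough JSL sketch, reusing the invented coefficients above with an assumed factor SD and lower spec limit:

nSim = 10000;
lsl = 75.3;   // assumed lower spec limit; the centre point sits right on it
sd = 0.1;     // assumed SD of the random variation around each factor setting
defects = 0;
For( i = 1, i <= nSim, i++,
	xm = Random Normal( 0, sd ); // Modifier = centre setting + noise
	xt = Random Normal( 0, sd ); // Temperature likewise
	xs = Random Normal( 0, sd ); // Time likewise
	y = 75.3 + 2*xm + 3*xt + 1.5*xs - 3*xm*xt - 2*xs^2;
	If( y < lsl, defects++ );
);
Show( defects / nSim ); // roughly 0.5, as in the simulation above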

 

When we move to the settings determined by maximise desirability (below), the Critical Output increases to 82.1 and the percentage of points below the lower spec limit drops to 4.6 percent. The contour plot shows that we are now sitting in the top left-hand corner, where the highest value of the Critical Output is predicted to be.

 

[Image: Prediction Profiler and contour plot at the maximise desirability settings]

 

If we now look at the settings determined by simulation experiment (below), we have moved to the top right-hand corner of the contour plot, where the contour lines are farther apart. The predicted value of the Critical Output is not quite as high, 79.8, but the percentage of points below the lower spec limit is substantially reduced, to 1.8 percent. Comparing the maximise desirability solution with the simulation experiment solution, the main difference is that simulation experiment has chosen a high setting for Modifier. This exploits the two-factor interaction between Modifier and Temperature and makes the Critical Output insensitive to changes in Temperature (the Temperature line in the Profiler is now flat). The Critical Output distribution becomes much tighter, with considerably fewer points out of spec, leading to a more robust process.
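Why does a high Modifier setting flatten the Temperature line? With an interaction term in the model, the sensitivity of the prediction to Temperature depends on where Modifier is set:

$$\frac{\partial \hat{y}}{\partial x_{\mathrm{Temp}}} = \beta_{\mathrm{Temp}} + \beta_{\mathrm{Mod \times Temp}}\, x_{\mathrm{Mod}}$$

Choosing $x_{\mathrm{Mod}} = -\beta_{\mathrm{Temp}} / \beta_{\mathrm{Mod \times Temp}}$ drives this derivative to zero, so variation in Temperature no longer transmits to the Critical Output. Simulation experiment finds this kind of setting because it rewards a low defect rate rather than a high predicted mean.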

 

[Image: Prediction Profiler and contour plot at the simulation experiment settings]

 

Let’s take a look at how simulation experiment found this more robust solution. Simulation experiment explores the factor space in a different way from maximise desirability. Rather than searching the factor space for the settings that give the most desirable value of the Critical Output, it searches for the settings that minimise the defect rate calculated by the Monte Carlo simulation. It still uses the same model as maximise desirability, but it uses that model to run a series of Monte Carlo simulations to determine how the defect rate varies across the factor space. It does this with a space-filling design and models the resulting defect rates with a Gaussian process. To launch simulation experiment, go into the red-triangle menu in the Simulator outline within the Prediction Profiler.
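To make that concrete, here is a rough JSL sketch of the procedure simulation experiment automates. JMP generates proper space-filling points and fits the Gaussian process for you; the uniform random points, the table and column names, and the model coefficients below are illustrative stand-ins:

nDesign = 80; // number of space-filling runs (assumed)
nSim = 5000;  // Monte Carlo draws per design point (assumed)
dt = New Table( "Defect Rate Experiment",
	New Column( "Modifier", Numeric ),
	New Column( "Temperature", Numeric ),
	New Column( "Time", Numeric ),
	New Column( "DefectRate", Numeric )
);
For( k = 1, k <= nDesign, k++,
	// stand-in for a space-filling point in the coded factor space
	m0 = Random Uniform( -1, 1 );
	t0 = Random Uniform( -1, 1 );
	s0 = Random Uniform( -1, 1 );
	defects = 0;
	For( i = 1, i <= nSim, i++,
		xm = m0 + Random Normal( 0, 0.1 ); // factor noise, SD assumed
		xt = t0 + Random Normal( 0, 0.1 );
		xs = s0 + Random Normal( 0, 0.1 );
		y = 75.3 + 2*xm + 3*xt + 1.5*xs - 3*xm*xt - 2*xs^2;
		If( y < 75.3, defects++ ); // count points below the lower spec limit
	);
	dt << Add Rows( 1 );
	dt:Modifier[k] = m0;
	dt:Temperature[k] = t0;
	dt:Time[k] = s0;
	dt:DefectRate[k] = defects / nSim;
);
// JMP then models the (log) defect rate over the factor space with a
// Gaussian process and minimises it to find the most robust settings.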

 

[Image: the simulation experiment dialog in the Simulator red-triangle menu]

 

When you run simulation experiment, it performs a Monte Carlo simulation at each of the factor settings specified by the space-filling design and records the defect rate from each of those simulation runs in a table, shown below. Each row represents a Monte Carlo simulation run at different factor settings. The table also contains a built-in script that will model the defect rate (it actually models the log of the defect rate, which is a better-behaved response). We can then find the settings that minimise the defect rate (using maximise desirability in the defect rate Profiler) and save those settings back to the original Prediction Profiler for the Critical Output.
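The log transform matters because defect rates can span several orders of magnitude, and the Gaussian process behaves better on the log scale. A one-line sketch of adding such a column to the table from the sketch above (the log base is my assumption, and a simulated defect rate of exactly zero would need special handling):

dt << New Column( "LogDefectRate", Numeric, Formula( Log10( :DefectRate ) ) );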

 

[Image: Gaussian process model of the defect rate from the simulation experiment table]

 

To see the simulation experiment demonstrated in more detail, watch this video: https://www.youtube.com/watch?v=9KZ7Ns3CQzU

 

The key difference between maximise desirability and simulation experiment is that maximise desirability does not take account of the natural variation in the factors when choosing the optimum factor settings. Simulation experiment does take account of that natural variation and finds the most robust settings, the ones that minimise the defect rate.
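In propagation-of-error terms, to a first-order approximation and assuming independent factor variation, the variation transmitted to the response is

$$\operatorname{Var}(\hat{y}) \approx \sum_{i} \left( \frac{\partial \hat{y}}{\partial x_i} \right)^{2} \sigma_{x_i}^{2}$$

Maximise desirability optimises $\hat{y}$ alone; simulation experiment in effect also shrinks the gradient terms, so the same factor standard deviations $\sigma_{x_i}$ transmit less variation to the Critical Output.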

 

The difference is nicely illustrated by a drawing my daughter made for me, showing how JMP can make complex problems simple:

[Drawing: maximise desirability vs. simulation experiment]


20 Comments
Visitor

Christel Kronig wrote:

Very useful and interesting post, thank you. I recreated your data table to look at this in JMP. I had not used the simulation part before, but I managed to successfully recreate the output. I will be using this for future DOEs!

Staff

Robert Anderson wrote:

Thanks Christel. I'm glad you like it. Good luck with those future DOEs!

Visitor

Marcello wrote:

This post is very interesting and clearly illustrates the problem and the way JMP can manage it. Thank you. The drawing is very telling!

Marcello.

Staff

Robert Anderson wrote:

Thanks Marcello, I'm glad you find it interesting. I'll tell my daughter that you liked her drawing.

Level II

Hi Robert, I have been watching your videos lately and learning a ton. I've noticed you have an "Images Add-in" with a vault of useful images to portray your ideas. I'm wondering if the Images Add-in is available anywhere for me to download? Thanks.

Staff

Thanks Jorge, I'm glad you have found the videos helpful. My 'Images' menu is not actually an Add-in, it is just a customised menu with links to journals that contain images or simple explanations of certain things. I think the only thing in that menu I have used in a recorded JMP demo is the 'Maximise desirability vs. Simulation experiment' cartoon. I've posted that in the discussion area in response to your question. If there is anything else you saw in the menu and would like, just let me know. 

Staff

The YouTube video link above appears to be corrupted. This is the correct link: https://www.youtube.com/watch?v=9KZ7Ns3CQzU

I also did a Discovery talk on this topic last year in Amsterdam which was recorded. Here is the link to that recording:

https://community.jmp.com/t5/Discovery-Summit-Europe-2016/Robust-Optimisation-of-Processes-and-Produ...

Level I

Thank you very much for the detailed info.

But I observed a strange situation. When I run the simulation experiment 3 or 4 times, JMP suggests different factor levels with smaller defect rates each time. The more simulation experiments I run, the smaller the defect rate JMP suggests.

I read a lot but I could not understand why it happens. How can I trust that the simulation experiment gives me the globally optimal factor levels?

 

Staff

Thanks Ella. What I think might be happening is that the default setting for 'Portion of factor space to experiment with' is 0.5, which means simulation experiment only explores half of the full factor range around the current Profiler factor settings. So you may not reach the factor settings that give the lowest defect rate the first time you run simulation experiment, but each time you run it again you will move closer to those settings. If you change 'Portion of factor space to experiment with' to 1, simulation experiment will explore the full factor range on the first run and you should find the settings that give the lowest defect rate straight away. You could then run simulation experiment again from those settings with a reduced 'Portion of factor space to experiment with' if you wanted an even more precise estimate of the optimum settings. Let me know if that resolves the issue you are seeing. Robert

Level I

Thank you very much for your quick reply and interest.

I also doubted that the portion level could cause these unstable solutions.

I have tried running the simulation with 'Portion of factor space to experiment with' set to 1, both before and now.

But JMP gave me a worse solution with a higher defect rate compared to portion level = 0.5.

I really do not understand why it is unstable. @markbailey @robert_anderson 

Staff

Hi Ella, that's interesting. What is the model you are running simulation experiment on? Is it a quadratic polynomial model with 2-factor interactions or something more complicated? Can you share an anonymized version of the data table and model? I think I would have to see the data and model to have a chance of figuring out what the problem might be. You can email me at robert.anderson@jmp.com if that's easier. Robert

Staff

Hi Ella, one further question: what is the R-square for your model? Robert

Level I

I am trying to optimize 4 models.

Three of them were fitted in MATLAB since I used FGLS, and then I created those formulas in JMP.

The fourth one is a neural network.

 

That's why I do not know the R^2's of the 3 FGLS models. The neural network model's R^2 is approximately 40%, as far as I remember.

 

My models are like this (they have main effects and interactions):

 

First one:

-9511 + 0.1 * :x1 + 0.23 * :x3 + 745 * :x4 - 369 * :x6 - 0.8 * :x7
- 4.2 * (:x1 - 2521) * (:x4 - 13)
+ 2.3 * (:x1 - 2521) * (:x5 - 13.32)
+ 2 * (:x2 - 1.36) * (:x3 - 60.1)
- 23.8 * (:x3 - 60.1) * (:x6 - 0.82)
+ 33535 * (:x4 - 13) * (:x5 - 13.32)
+ 11650 * (:x4 - 13) * (:x6 - 0.82)
- 6088 * (:x5 - 13.32) * (:x6 - 0.82)
+ 3.9 * (:x5 - 13.32) * (:x7 - 5.4)

 

Second one:

31.4 * :x2 - 1.4 * :x3 - 199 * :x4 + 203 * :x5 + 0.3 * :x7
+ 2.1 * (:x1 - 2521) * (:x4 - 13)
+ 760 * (:x2 - 1.36) * (:x6 - 0.82)
+ 11 * (:x3 - 60) * (:x4 - 13)
+ 0.01 * (:x3 - 60) * (:x7 - 5.5)
- 4.1 * (:x4 - 13) * (:x7 - 5.5)
+ 5015 * (:x5 - 13.3) * (:x6 - 0.8)

 

Third one:

 

180 - 0.04 * :x1 - 2.3 * :x3 + 138.4 * :x6 + 0.3 * :x7
+ 2.2 * (:x1 - 2522) * (:x4 - 13)

 

Last one:

-1.78381405723789 * TanH(
	0.5 * (673.879493391115
		+ 0.00156828939441121 * :x1
		+ 2.15320544406296 * :x2
		+ 0.11296315865742 * :x3
		- 73.8380042960357 * :x4
		+ 22.4620052527818 * :x5
		- 34.1317569124232 * :x6
		+ 0.00576272662208232 * :x7)
)

 

Random variables: :x4, :x5, :x6

The rest are fixed variables.

 

I hope these will help.

Level I

Hi Robert,

 

This is an extremely fascinating approach. Thank you for sharing. I'd like to know if it is possible to use this approach to determine the optimal process window for this process. In other words, what are the ranges of the input variables that would result in the largest orthogonal region (volume?) while ensuring the response stays between two specification limits?

 

I am using the term "orthogonal" to imply that the process parameters will be listed as a table of parameter ranges, and any combination of the process parameters should result in a process output that is within the specification limits. I want to maximize the volume of the process window to allow for the most variability of the process parameters. 

 

Regards,

David

Staff

Hi David, if I'm understanding you correctly, that would be taking simulation experiment to the next level. With simulation experiment you have to specify how much variation you anticipate in each factor setting, or in other words how well you can control each factor. I think what you are asking is how much each factor setting could vary before it starts to substantially impact the overall defect rate. To do something like that in JMP at the moment, you would need to run a series of simulation experiments, each with different amounts of variation in the factors, to see what impact that has on the overall defect rate. To do that efficiently you would probably want to run a space-filling DOE and model the results using a Gaussian process; in effect, a simulation experiment on a simulation experiment. You could then build a model for the overall defect rate based on the SD (standard deviation) values used in the original simulation experiments. Something like that could possibly be scripted, but unfortunately it is probably beyond what I could do easily or have the time to work on. Maybe someone else will take up this challenge!

Level I

Thanks for the reply, Robert. You summarized what I am after quite well and gave me some interesting things to think about and play with. I'll gladly post any progress I make on this topic, but to your last point, if anyone in the community shares interest in this topic, I'd welcome the collaboration!

Staff

Hi David, I'm glad I understood your idea correctly. I agree that what you have suggested would be a useful extension to simulation experiment. Maybe you could post your idea to the 'JMP Wish List' section of the community. I actually suggested a few years ago that more people would use simulation experiment if the workflow could be simplified and the optimisation could be accomplished in just one click, in a similar way to maximise desirability. Unfortunately that never happened. If you agree, maybe you could make that suggestion too? Coming from a customer, it will probably receive greater priority. I look forward to seeing where you take this. If I can be of any assistance, let me know. I would love to spend more time on this, but unfortunately I have to keep focused on my 'day job'.

Staff

Anyone interested in this blog should also check out the 'JMP On Air' video Simulation Experiment, recorded a couple of months ago; the YouTube video above is about five years old. I also did a Discovery talk on this topic a few years ago: Robust Optimisation of Processes and Products by Using Monte Carlo Simulation Experiments.

Level I

Thanks Robert for your links; they are always a help for my studies.

If you have other documents or videos about, let's say, the Defect Profiler or other topics under the Prediction Profiler's red-triangle menu, I am very interested in exploring them.

 

And hi David,

I also thought about your problem in my study before, but I gave up since I had a more important problem to solve first. But from the explanation above, I see that there is no one-click solution in JMP. If you have a solution for it, I would also be glad to hear about it and apply it.

  

Staff

Thanks Ella. A while back someone else, albiruni81, was also interested in the idea that you and David discussed above, and there was some discussion around the topic: Finding the allowable limits for a factor to ensure low defect rates. It would certainly be good if someone came up with an automated way to explore how much variation you can tolerate in each factor without the overall defect rate being seriously impacted. You can explore this manually in an iterative way with existing JMP capabilities, as the other thread discusses, but a 'one click' solution would be great. You have both encouraged me to try and find some time to look at this myself. However, I suspect it may require greater scripting abilities than I currently possess, but I'm going to have a go. If someone else manages to come up with a solution faster, that's fine with me.