Incr_ch22
Level II

Response Surface Analysis: Finding optimised settings for process

So I designed an RSM in JMP after completing the screening. I have 5 input variables and 5 output variables (responses) measured on the produced samples. I now have the response results and have run the model via Model > Run Script. This takes me to the Least Squares fit, and in the Prediction Profiler and Contour Profiler there is already a suggested set of input parameters based on how I defined the response goals, i.e. whether each had to be maximised, minimised, matched to a target, etc. My questions are in two parts:

 

1. I realised that for one of the responses (density) I had selected the maximise option instead of setting a target. The density of this material is about 1 g/cm3, and my experimental results are in an acceptable range (0.7-0.9), but the predicted outputs (because I had selected maximise) are shown on the order of 1000s. The RMSE is also about 360! Is there a way for me to go back, adjust this in my RSM (and set limits), and evaluate the RSM again?

 

2. After one gets the desired settings from the Prediction Profiler, how is the model then generated, i.e. how does one come up with an equation that describes it? Also, when it comes to testing the model to show its suitability, do I then use just this one suggested set of parameters, produce about 10 samples, and check whether the result matches the prediction for all of them?

 

I am hoping that someone here has had similar experiences and can help. I would also appreciate being pointed to the right resources.

Thank you. 


10 REPLIES
Victor_G
Super User

Re: Response Surface Analysis: Finding optimised settings for process

Hi @Incr_ch22,

 

  1. You can CTRL + left-click on your desirability functions in the Profiler to change goals and ranges. You can also access this via the red triangle: choose the submenu "Optimization and Desirability" and then "Set Desirability". See Prediction Profiler Options (jmp.com).
    If the RMSE (Root Mean Squared Error) is that big for this density response, changing the desirability won't change the RMSE of the model. We don't have your file and results, but there may be something to do about the modeling: changing the type of model, including/removing certain terms, etc.
  2. For each response, you can click the response's red triangle, then "Estimates", then "Show Prediction Expression": Show Prediction Expression (jmp.com). You can also save the formulas directly in your data table via the red triangle, then "Save Columns", then "Prediction Formula" (and "StdErr Pred Formula" to get the confidence ranges): Save Columns (jmp.com)
    For validation of the DoE, there may be several ways to confirm the predicted optimum, but it is always good practice to run validation experiments and compare the measured values with the model's predictions. You can also use the Simulator to check whether your response is robust to small variations of your inputs: Simulator (jmp.com)
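As an aside for readers without the file at hand: the "prediction expression" JMP shows is just the fitted least-squares polynomial, and the RMSE is the root mean squared residual of that fit. Here is a minimal, hypothetical sketch (plain Python, not JMP; all data are made up) that fits a small full-quadratic response-surface model and prints its prediction expression and RMSE:

```python
import numpy as np

# Made-up data: 20 runs of two coded factors and one response.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 20)
x2 = rng.uniform(-1, 1, 20)
y = 0.8 + 0.1 * x1 - 0.05 * x2 + 0.02 * x1 * x2 + rng.normal(0, 0.02, 20)

# Design matrix for the full quadratic (response surface) model:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta
# RMSE with the usual least-squares denominator (n minus number of parameters)
rmse = np.sqrt(np.sum((y - y_hat) ** 2) / (len(y) - X.shape[1]))

terms = ["1", "x1", "x2", "x1^2", "x2^2", "x1*x2"]
expression = " + ".join(f"{b:.3f}*{t}" for b, t in zip(beta, terms))
print("Prediction expression: y =", expression)
print("RMSE:", round(rmse, 4))
```

The coefficients `beta` are exactly what the saved "Prediction Formula" column encodes, which is why an RMSE of 360 on a response that lives around 1 g/cm3 signals a data or model problem rather than a desirability setting.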

 

I hope this first answer will help you,

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
Incr_ch22
Level II

Re: Response Surface Analysis: Finding optimised settings for process

Thank you very much @Victor_G. I tried the desirability functions and they are working well, but as you said, this won't change the RMSE of the model :(. I am not sure where I went wrong with the density. I have also tried removing the terms with p > 0.05 in the Effect Summary, but that is not helping with the very high predicted density values. I do intend to run validation experiments for my process.

 

Thank you @P_Bartell for your input. The variations so far are in an acceptable range, except for this density.

 

Could you please kindly have a look at my model and analysis and let me know your thoughts? 

 

It seems I am having issues uploading my file: 

  • "The attachment's content type does not match its file extension and has been removed." This is the notification I am getting. What does that mean?
Victor_G
Super User

Re: Response Surface Analysis: Finding optimised settings for process

Hello @Incr_ch22,

 

If you can't get a "good enough" model with the factors/variables you have (including the various powers of, and interactions between, these factors), there may be missing variables/factors (not present in your DoE) with better predictive capacity, or outliers biasing the model or adding noise to it.

 

Concerning your file, make sure the end of the file name matches the format/type/extension. For example, Big_Class.jmp is a JMP data table; a project or journal will have a different extension. Have you tried saving your file again and specifying the extension? Or saving it in a different format (such as Excel)?

 

 

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
Incr_ch22
Level II

Re: Response Surface Analysis: Finding optimised settings for process

So I managed to identify the source of the error in the density values: it was a misplaced decimal comma in one of the entries. I have sorted that out, and the RMSE is now 0.02. I have gone back to the Prediction Profiler and set the ranges for the desirability. When I click Prediction Profiler > Output Grid Table, I get a table with the predicted results. My question is: is there an automatic way to analyse, or at least sort, this table so that runs with high overall desirability are at the top, going down to low desirability? Some runs have good overall desirability, but when I look at their predicted density values, they are not ideal. In my case this factor (density) plays an important role, as I am trying to make samples that are not too porous.

For the validation experiments, can I then select some parameter runs from this table (maybe 5) and compare the results with the predicted ones?

 

@Victor_G the tip about the Prediction Expression and formula worked for the individual responses. I could not, however, find something similar for the entire process.

@P_Bartell I have attached two pictures of the effect summary and the Prediction Profiler.

Victor_G
Super User

Re: Response Surface Analysis: Finding optimised settings for process

Hi @Incr_ch22,

Great, so it was more a problem of data quality (outliers) than of modeling.
A comment about the modeling: when you click "Model" to create the analysis with several responses, check the option "Fit Separately" (below Personality and Emphasis, on the right) before you launch the platform. You will have to sort and eliminate the terms for each response separately, but it lets you build a custom model for each response, which can increase the precision of your models (see the attached screenshot for comparison, or the file with the script "Model_fit-separately").

 

Concerning your question: by going into "Set Desirability", you can assign different importances to your responses; in your case, if density is very important compared to the other responses, you can increase its importance. It may be difficult to have all responses sorted from high to low desirability at once, since some of your responses may have conflicting goals/factors, but you can sort the output table by selecting the "Desirability" column, right-clicking on it, and choosing "Sort" then "Descending". You'll then have your simulated experiments sorted from highest overall desirability to lowest (and, if needed, you can sort on other specific response columns in the same way).

 

For the validation experiments, I would look at the predicted optimum and the predicted response values to check the accuracy of the model. Other points can be selected as well, depending on your goal (e.g. checking whether prediction variance is homogeneous across the design, or whether small deviations from the optimum are still acceptable for your process).
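To make the sorting advice above concrete, here is a small, hypothetical sketch (plain Python, not JMP; response names, goal bounds, and data are made up) of Derringer-Suich-style desirability functions and the overall desirability that the "Desirability" column ranks:

```python
import numpy as np

def d_maximize(y, low, high):
    """Desirability for a maximize goal: 0 at/below `low`, 1 at/above `high`."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def d_target(y, low, target, high):
    """Desirability for a match-target goal: peaks at `target`, 0 outside [low, high]."""
    left = np.clip((y - low) / (target - low), 0.0, 1.0)
    right = np.clip((high - y) / (high - target), 0.0, 1.0)
    return np.where(y <= target, left, right)

# Made-up predicted responses for 5 simulated runs
strength = np.array([10.0, 14.0, 12.0, 15.0, 11.0])  # goal: maximize
density = np.array([0.95, 0.70, 0.85, 0.88, 1.20])   # goal: target 0.85 g/cm3

d1 = d_maximize(strength, low=10.0, high=15.0)
d2 = d_target(density, low=0.7, target=0.85, high=1.0)

# Overall desirability is the geometric mean of the individual desirabilities,
# so a run that fails badly on any one response drops toward zero.
overall = np.sqrt(d1 * d2)

# Descending sort, like right-click > Sort > Descending on the column
order = np.argsort(overall)[::-1]
print("Runs ranked best to worst:", order)
```

Weighting a response more heavily (e.g. density) corresponds to raising its desirability to a larger exponent before taking the geometric mean, which is what increasing its importance does.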

 

I'm not sure what you mean by "find something similar for the entire process": you can save every individual response's prediction formula in order to map/predict the whole process.

If you want to save all response formulas in one click, press CTRL + click on the red triangle of one of the responses, choose "Save Columns", then "Prediction Formula" (and "StdErr Pred Formula" to get confidence intervals for each response's prediction formula).

 

Hope it helps you,

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
Incr_ch22
Level II

Re: Response Surface Analysis: Finding optimised settings for process

Thank you very much for the explanations and the tip about fitting the models separately; I will definitely apply that from now on. On reflection, my reasoning behind "wanting one equation for the entire process" was flawed, as I have 5 outputs (Ys) that I am measuring against. Your response was very helpful in making me understand some things.

 

@Mark_Bailey Thank you for your input; that's exactly what was on my mind, but as a DOE and JMP amateur I was not too sure.

 

@P_Bartell That's a great statement; it goes to show just how powerful DOE and statistics can be!

Thank you all again!

 

P_Bartell
Level VIII

Re: Response Surface Analysis: Finding optimised settings for process

You didn't attach any files for me to look at. And a warning: don't bother attaching a JMP data table, since I don't have JMP on my computer and can't run any analysis. But if you have some non-JMP files/pictures containing analysis platform report output, I'll take a look and offer whatever thoughts or observations I may have.

 

P_Bartell
Level VIII

Re: Response Surface Analysis: Finding optimised settings for process

The only thing I suggest, over and above the advice from @Victor_G, to do BEFORE running your validation trials is to examine the residual plots in the region of the optimal process settings. This will give you some idea of the minimum variation you can expect from the validation trials. If this amount of variation is troublesome from a practical point of view, moving on to validation trials might be nothing more than an academic exercise.
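That residual check can be illustrated with a few lines (a hypothetical sketch in plain Python, not JMP; the numbers are made up): the spread of residuals (observed minus predicted) near the chosen settings is roughly the best repeatability one can hope for in validation runs.

```python
import numpy as np

# Made-up density values (g/cm3) at runs near the proposed optimum,
# with the model's prediction at those settings.
observed = np.array([0.84, 0.87, 0.85, 0.86, 0.83])
predicted = np.array([0.85, 0.85, 0.85, 0.85, 0.85])

residuals = observed - predicted
spread = residuals.std(ddof=1)  # sample standard deviation of the residuals

print("Residual spread near the optimum:", round(spread, 4))
# If this spread is already too large relative to the process spec,
# validation trials cannot be expected to do better.
```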

Re: Response Surface Analysis: Finding optimised settings for process

If you want to validate the model, and not just the optimal factor settings, select one or more sets of factor settings that predict poor responses and see if they confirm.