SimonFuchs
Level III

Response surface analysis with multiple responses

Dear all, 

 

After running a big DoE with a response surface design (made with the Custom Designer), a couple of questions came up, and it would be great if someone could help me here. I am a rather new user of JMP and DoE, but willing to learn.

 

1) We have measured different responses (radius, dispersity, and melting point) with the same factors (salt, pH, sugar). How is the significance calculated here when I evaluate how the set of factors affects the different responses?

How are these combined with each other?
2) I also found main factors that are not significant on their own, but become significant again in second-order terms with another main factor. Should I leave the second-order terms in and remove the non-significant main factors? How does this affect the model?
3) How can I see that I have reasonable data (other than lack of fit) and can evaluate it with a clear conscience? How can I see that I can trust the desirability function?
4) Can I determine whether response values correlate with each other (e.g., melting point with radius)?
5) I have run a response surface design made with the Custom Designer. Can I predict values that are outside the measured range? How do I see trends in the data that tell me whether factors should be increased or decreased further to get optimal results?
6) How can I redefine the limits of the response values after I have finished the setup in the Custom Designer menu?
7) Where can I learn how to evaluate multiple response values at the same time, and how this differs from evaluating them separately?

 

Thanks a lot for all of your help!

3 REPLIES
P_Bartell
Level VIII

Re: Response surface analysis with multiple responses

The answers to all your questions could just about fill a basic book on DOE. My suggestion is to pause your work and complete all the DOE and modeling-related modules in the free SAS "Statistical Thinking for Industrial Problem Solving" course. All of your questions are covered there (plus many topics you didn't specifically ask about).

Re: Response surface analysis with multiple responses

  1. Select the best model separately for each response. Save the best model as a column formula. Combine them with Graph > Profilers > Profiler.
  2. Yes, it is highly recommended that you maintain 'model hierarchy' when using a linear model with your response data. So if you decide to include a term A*B, then you should include both the A term and the B term as well, regardless of their own significance (the model-hierarchy sketch after this list illustrates this).
  3. Two questions:
    1. Use residual analysis during the model selection of each response. Also, have you performed model verification runs yet?
    2. Your trust in the desirability function depends on your trust in your decision about the limits of the response and the target of the response (the desirability sketch after this list shows how those choices enter the function).
  4. Sure.
  5. Two questions:
    1. You can predict beyond the factor ranges. This form of prediction is called extrapolation. It is risky. JMP will adjust confidence intervals for the predicted mean response, but JMP assumes that the model is valid beyond the factor range. Only you know if that assumption is reasonable (the model-hierarchy sketch after this list also shows an extrapolated prediction).
    2. The Prediction Profiler is the best tool when you have multiple responses and factors.
  6. There are two methods:
    1. From the data table, use one of these methods:
      1. Click the asterisk icon after the response name in the columns panel and select Response Limits. Update the information and click OK.
      2. Select the response column, select Cols > Column Info, select Response Limits in the property list, update the information, and click OK.
    2. From the Prediction Profiler, hold the Control key on Windows or the Command key on Mac, click the graph of the desirability function, update the information, and click OK. Then click the red triangle next to the Prediction Profiler and select Desirability > Save Desirability if you want to keep the update.
  7. Use the JMP Help to see information in the Profilers guide.
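
To make points 2 and 5 a bit more concrete, here is a minimal sketch in Python with pandas and statsmodels rather than JSL. The file name and the column names (salt, pH, sugar, radius) are only placeholders taken from the question, so treat it as an illustration of the idea, not of your actual table.

```python
# Sketch of model hierarchy (point 2) and extrapolation (point 5),
# using placeholder column names from the question.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("doe_results.csv")  # hypothetical export of the DoE data table

# 'salt * pH' expands to salt + pH + salt:pH, so both parent main effects stay
# in the model even when only the interaction is significant (model hierarchy).
model = smf.ols("radius ~ salt * pH + sugar + I(salt**2)", data=df).fit()
print(model.summary())

# Prediction at a point inside the design range and at an extrapolated point;
# suppose salt was only varied from 0 to 10 during the experiment.
new_points = pd.DataFrame({
    "salt":  [5.0, 15.0],   # 5 is interpolation, 15 is extrapolation
    "pH":    [7.0, 7.0],
    "sugar": [2.0, 2.0],
})
pred = model.get_prediction(new_points).summary_frame(alpha=0.05)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper"]])
# The interval at salt = 15 is wider, but its validity rests entirely on the
# assumption that the fitted surface still describes the process out there.
```

In JMP itself the same information comes from the Fit Model report and the Prediction Profiler described above.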
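For point 3.2, here is a tiny sketch of a 'larger is better' desirability in the classic Derringer and Suich style. JMP's own desirability functions are smooth variants of this idea, but the dependence on your chosen limit and target is the same; the helper name and the numbers are made up for illustration.

```python
# Sketch of a 'larger is better' desirability in the Derringer-Suich style:
# 0 below the lower limit, 1 at or above the target, a power curve in between.
import numpy as np

def desirability_larger_is_better(y, lower, target, weight=1.0):
    """Hypothetical helper: map response values to [0, 1] given chosen limits."""
    y = np.asarray(y, dtype=float)
    return np.clip((y - lower) / (target - lower), 0.0, 1.0) ** weight

# Shifting 'lower' or 'target' moves the whole curve, which is why trust in the
# desirability function is really trust in those choices.
print(desirability_larger_is_better([40, 55, 70], lower=50, target=65))
# -> 0 for 40, about 0.33 for 55, and 1 for 70
```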
statman
Super User

Re: Response surface analysis with multiple responses

Based on your questions, I agree with Pete that you should study the subject before you waste resources. Starting with response surfaces, which are typically optimization-type designs, before you understand first-order models and have a thorough understanding of the noise associated with the situation is inefficient and often ineffective.

Mark's answers are, of course, right on.  I'll just add some thoughts to contemplate:

1. The situation you describe is actually the norm. We live in a multivariate world. There is most certainly a balance to strike when optimizing across multiple response variables. You should run Analyze > Multivariate Methods > Multivariate (put all of the Y's in). This will give Pearson correlation coefficients and, more importantly, a scatterplot matrix of the graphical relationships between the response variables. This is also the answer to question 4.
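
Outside JMP, the same check is only a few lines; here is a rough Python sketch with pandas, again using the placeholder response names from the question.

```python
# Pearson correlations and a scatterplot matrix of the responses (assumed names).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("doe_results.csv")  # hypothetical export of the DoE data table
responses = df[["radius", "dispersity", "melting_point"]]

print(responses.corr(method="pearson"))           # pairwise Pearson coefficients
pd.plotting.scatter_matrix(responses, figsize=(6, 6))
plt.show()
```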

2. Regarding hierarchy, think of it this way: if there is an active interaction that you want to take advantage of, you most certainly will want to manage both of the factors in that interaction. Of course, if you set one of those (say A), then you would only need to manage the other (B) with respect to that setting of A. Adding the insignificant main effect to the model will likely result in a larger delta between your R-Square and R-Square Adjusted and may decrease your p-values, but it is more important that the model be realistic.

3. Ah, reasonable... My first step in the analysis of ALL data is to determine if the results are of any practical value. Did the response variable change enough over the study to warrant statistical evaluation (I call this practical significance)? Was the experiment space representative of future conditions (a question of inference space)? If not, extrapolation is most certainly suspect. How was the noise (factors not specifically manipulated) handled during the experiment?

You can use the R-Square statistics (the delta, with emphasis on the R-Square Adjusted), RMSE, CV (across multiple Y's), p-values, and residual analysis to guide model building.
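
Purely as an illustration of where those numbers come from, here is a short Python sketch with statsmodels; the model and column names are the same placeholders as in the earlier sketches.

```python
# Sketch: fit statistics that can guide model building (same placeholder names).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

df = pd.read_csv("doe_results.csv")              # hypothetical DoE export
model = smf.ols("radius ~ salt * pH + sugar", data=df).fit()

rmse = np.sqrt(model.mse_resid)                  # root mean square error
cv = 100 * rmse / df["radius"].mean()            # coefficient of variation, %
print(f"R2 = {model.rsquared:.3f}, adj R2 = {model.rsquared_adj:.3f}, "
      f"delta = {model.rsquared - model.rsquared_adj:.3f}")
print(f"RMSE = {rmse:.3f}, CV = {cv:.1f}%")
print(model.pvalues)                             # term p-values

# Residuals vs. fitted values: look for curvature or changing spread.
plt.scatter(model.fittedvalues, model.resid)
plt.axhline(0, color="grey")
plt.xlabel("fitted")
plt.ylabel("residual")
plt.show()
```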

 

"All models are wrong, some are useful" G.E.P. Box