Question about response surface analysis

Aug 22, 2017 4:36 AM
(5601 views)

Hello,

I have a question about the analysis of a response surface design.

In my design I have 4 factors and 3 responses.

I launch the Fit Model platform (which JMP has already populated), then from the Effect Summary I remove the non-significant effects. After that I look at the Prediction Profiler and maximize the desirability across my 3 responses.

However, some effects are not significant for all responses. I therefore wonder whether I should instead run an analysis for each response separately, keep only the significant effects in each model (so I may end up with 3 different models), save the prediction formulas, and finally launch the Profiler platform from the Graph menu to maximize desirability.

What do you think about that?

Thanks in advance for your advice!

1 ACCEPTED SOLUTION

Having a different model for each response strikes me as very reasonable. After all, there is no reason why, on physical, technological, or mechanistic grounds, the pattern of response should be the same. This also brings up the related issue of whether, and then how, a model should be refined by dropping terms. Statistical considerations aside, such choices should be influenced by what the resulting model will actually be used for.
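To picture this per-response term dropping outside JMP, here is a rough Python sketch with made-up data. The `fit_ols` helper and the |t| ≥ 2 cutoff are illustrative stand-ins for the Effect Summary p-values; each response would get its own pass and its own retained term list:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 4 coded factors, 30 runs, one response.
# Only factors 0 and 1 truly drive this response.
n = 30
X_factors = rng.uniform(-1, 1, (n, 4))
y = 5 + 3 * X_factors[:, 0] - 2 * X_factors[:, 1] + rng.normal(0, 0.5, n)

def fit_ols(X, y):
    """Least-squares fit returning coefficients and their t-statistics."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

# Backward elimination: repeatedly drop the weakest term (|t| < 2, a
# rough cutoff standing in for a formal p-value) until every remaining
# term looks useful. The intercept is always kept.
terms = list(range(4))  # candidate main effects for this one response
while terms:
    X = np.column_stack([np.ones(n)] + [X_factors[:, j] for j in terms])
    beta, t = fit_ols(X, y)
    weakest = int(np.argmin(np.abs(t[1:])))  # ignore the intercept
    if abs(t[1:][weakest]) >= 2:
        break
    terms.pop(weakest)

print("retained factors:", terms)
```

Running the same loop against each response's column would typically leave a different `terms` list per response, which is exactly the "3 different models" situation described in the question.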


4 REPLIES



Re: Question about response surface analysis

Thanks for your answer, Ian. I will keep exploring my data set!


Re: Question about response surface analysis

I concur with Ian. In fact, we generally recommend this practice in training whenever there is more than one response. Taking the idea a step further, it means that you can model each response with the best possible model (linear model, neural network, and so on) and then combine them for optimization. Also, you might include models in the optimization that came from past experiments or theory. This setup is very flexible.
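To make "combine the separate models for optimization" concrete, here is a rough numerical sketch in plain Python/NumPy, outside JMP. The three prediction formulas, desirability bounds, and grid search are all invented for illustration; in JMP the Profiler's desirability maximization does this for you:

```python
from itertools import product
import numpy as np

# Hypothetical prediction formulas, each saved from its own reduced model
# (note that each response depends on a different subset of the 4 factors).
def y1(x):  # larger-is-better response
    return 50 + 8 * x[0] + 3 * x[1] - 2 * x[0] ** 2

def y2(x):  # smaller-is-better response
    return 5 - 1.5 * x[1] + 0.5 * x[2] ** 2

def y3(x):  # target-is-best response, target = 20
    return 20 + 4 * x[0] * x[2] - 3 * x[3]

# Desirability functions in the Derringer-Suich style (linear ramps).
def d_larger(y, lo, hi):
    return float(np.clip((y - lo) / (hi - lo), 0, 1))

def d_smaller(y, lo, hi):
    return float(np.clip((hi - y) / (hi - lo), 0, 1))

def d_target(y, lo, target, hi):
    return d_larger(y, lo, target) if y <= target else d_smaller(y, target, hi)

def overall_desirability(x):
    # Geometric mean of the individual desirabilities.
    d = np.array([d_larger(y1(x), 40, 60),
                  d_smaller(y2(x), 1, 8),
                  d_target(y3(x), 10, 20, 30)])
    return d.prod() ** (1 / 3)

# Brute-force grid search over the coded factor space [-1, 1]^4.
grid = np.linspace(-1, 1, 11)
best_d, best_x = max(
    ((overall_desirability(np.array(x)), x) for x in product(grid, repeat=4)),
    key=lambda t: t[0])
print(round(best_d, 3), best_x)
```

The key point, as above, is that `y1`, `y2`, and `y3` need not come from the same model family or even the same experiment; the combined desirability only needs each model's prediction formula.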

Learn it once, use it forever!


Re: Question about response surface analysis

Thanks for the clarification, Mark!
