
K fold Cross-Validation in DSD


Jun 14, 2018 12:25 PM
(1820 views)

Hello,

I have a quick question. When you analyze a DSD, the literature (and the instructions in JMP) proposes two methods: the first is Fit Definitive Screening, and the second is forward stepwise regression with *Stopping rule: Min AICc* and *Rules: Combine*. I was thinking about using k-fold cross-validation if forward stepwise regression turns out to give a better model than Fit Definitive Screening. But does that even make sense with a DSD? A DSD is a foldover design, and I suspect that fitting a model on only, for example, 3/4 of the runs wouldn't make sense. Am I right or wrong?
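For context, the forward stepwise idea with a min-AICc stopping rule can be sketched outside JMP roughly like this. This is an illustrative sketch only: the `aicc` parameter-counting convention is one common choice, the greedy search is an assumption, and it does not implement JMP's *Rules: Combine* effect-heredity behavior.

```python
import numpy as np

def aicc(rss, n, p):
    """AICc for a Gaussian OLS fit; p = number of estimated parameters
    (coefficients plus one for the error variance -- one common convention)."""
    aic = n * np.log(rss / n) + 2 * p
    return aic + 2 * p * (p + 1) / (n - p - 1)

def forward_stepwise_aicc(X, y, names):
    """Greedy forward selection, stopping when AICc stops improving."""
    n = len(y)
    chosen, remaining = [], list(range(X.shape[1]))

    def rss_for(cols):
        M = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(M, y, rcond=None)
        return float(np.sum((y - M @ beta) ** 2))

    best = aicc(rss_for([]), n, 2)          # intercept + error variance
    while remaining:
        scores = [(aicc(rss_for(chosen + [c]), n, len(chosen) + 3), c)
                  for c in remaining]
        score, col = min(scores)
        if score >= best:                   # stop at the AICc minimum
            break
        best = score
        chosen.append(col)
        remaining.remove(col)
    return [names[c] for c in chosen]
```

For example, on simulated data where only factor A is active, the routine should pick A up in its first step.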

Thank you for your answer.

Danijel

1 ACCEPTED SOLUTION



First of all, there is no method that I know of that is guaranteed to find the best model. (That is, if you can even define "best.")

Second of all, this case is a screening experiment, in which economy is one of the most desirable characteristics of the design. That economy means a DSD is a small design, compared to the minimum number of runs for the linear model with only main effects. Holding out any data will affect the estimation and the testing (power). You might be able to compensate by adding extra runs, but then you might just as well use a custom design.

Third of all, K-fold cross-validation is useful with small data sets, but the analysis, and the unique benefits of the DSD, depend on the special structure of this design. Omitting each fold in turn would destroy this structure and compromise the fitting.
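To make the structural point concrete, here is a toy sketch in Python/NumPy (not JMP). The mirror-pair matrix below only mimics a DSD's fold-over layout: paired runs with opposite signs plus a center run; it is not a real conference-matrix DSD. It shows how holding out a fold orphans mirror pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
# Toy mirror-pair design: random +/-1 rows with a zeroed diagonal,
# folded over, plus one center run (mimics a DSD's layout only).
half = rng.choice([-1.0, 1.0], size=(m, m))
np.fill_diagonal(half, 0.0)
D = np.vstack([half, -half, np.zeros((1, m))])

def broken_pairs(keep_mask):
    """Count mirror pairs (row i, row i + m) with exactly one member held out."""
    return int(sum(keep_mask[i] != keep_mask[i + m] for i in range(m)))

full = np.ones(len(D), dtype=bool)
print(broken_pairs(full))        # 0: every mirror pair is intact

mask = full.copy()
mask[[0, 1, 2]] = False          # hold out roughly a quarter of the runs
print(broken_pairs(mask))        # 3: those runs' mirror partners are orphaned
```

Each orphaned pair loses the cancellation that makes main effects orthogonal to two-factor interactions in the full fold-over.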

Would it work? Most likely. Would it work well? Not likely.

It can't hurt to try it, instead of thinking about it. Simulation would be a valuable approach. JMP makes that approach easy.
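As a rough illustration of that simulation idea outside JMP (a sketch with an assumed toy design and assumed effect sizes, not JMP's simulator), one can compare the sampling variability of a coefficient estimated from all runs versus from a random three-quarters of them:

```python
import numpy as np

rng = np.random.default_rng(7)
m, reps = 6, 2000
beta = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])   # assumed: two active factors

# Toy mirror-pair design (illustrative only, not a real DSD).
half = rng.choice([-1.0, 1.0], size=(m, m))
np.fill_diagonal(half, 0.0)
D = np.vstack([half, -half, np.zeros((1, m))])
n = len(D)
M = np.column_stack([np.ones(n), D])               # intercept + main effects

ests_full, ests_hold = [], []
for _ in range(reps):
    y = D @ beta + rng.standard_normal(n)
    b_full, *_ = np.linalg.lstsq(M, y, rcond=None)
    keep = rng.choice(n, size=3 * n // 4, replace=False)  # drop a "fold"
    b_hold, *_ = np.linalg.lstsq(M[keep], y[keep], rcond=None)
    ests_full.append(b_full[1])                    # estimate of the first effect
    ests_hold.append(b_hold[1])

print(np.std(ests_full), np.std(ests_hold))        # holdout estimates vary more
```

The comparison quantifies the power loss from holding out runs that the second point above describes.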

Learn it once, use it forever!

2 REPLIES



Re: K fold Cross-Validation in DSD

Thank you. I was thinking along similar lines.