Potentially daft question of the day.....
......but it was something that just occurred to me when preparing some training material for some colleagues.
I rely quite heavily on the Fit Two Level Screening option for my formulation work; I'm often screening multiple parameters against multiple outputs, and a more traditional approach sometimes misses the secondary interactions that, invariably, are the thing I'm looking for.
I did notice that it's a little 'clunky' to add the factors to the model, especially as I can have anything up to 20 factors/outputs at one time. The factors only seem to autofill for the output you clicked on and not for any of the others. Is there a way to make the 'Make Model' option at the bottom of the screening output apply to ALL outputs at the same time, rather than having to add the crosses and significant effects manually for each of the other outputs before running the model?
I ask because it would be really nice to be able to go through the screening output, add the factors that you're interested in (I think it's a nice visual explanation of your data), and then have them auto-populate into the model that you're going to run. Unless I'm missing something and this is possible already?!
I am not completely clear about your question from your description but I understand that you are using the Screening platform. This platform applies a couple of simple techniques to identify important factors and their effects. It should be used only with two-level factors. It should not be used in a case of restricted randomization (e.g., split-plot design). See this page in the JMP Help system for more information.
You said, "I did notice that it's a little 'clunky' to add the factors to the model, especially as I can have anything up to 20 factors/outputs at one time. The factors only seem to autofill for the output you clicked on and not for any of the others."
The Screening platform does not use a specified model like Fit Least Squares or Generalized Regression, even if you have a Model table script. It builds orthogonal contrasts on its own, based on the screening principles of effect sparsity, effect hierarchy, and strong effect heredity. The Screening platform will not provide good information if these principles are violated. (It won't report an error, as there is none in the calculations; it is up to you to verify the model assumptions.)
It uses a simple rule to initially select contrasts based on the individual p-value: less than 0.10. You can interactively select or deselect contrasts in the list with Shift-click, Control-click, or Command-click.
You said, "Is there a way to make the 'Make Model' option at the bottom of the screening output apply to ALL outputs at the same time, rather than having to add the crosses and significant effects manually for each of the other outputs before running the model?"
I am not certain what you are asking. Are you asking whether it is possible to select the same term across all the responses in the Screening platform? It is not. Are you asking whether it is possible to broadcast the command (click Make Model) to the other responses? It is not. The analysis for each response is separate.
I hope that my reply clarified the behavior of this platform rather than adding more confusion about it.
Hi Mark, Thanks for the response.
Apologies for the confusion; I was probably not explaining myself very well. You have, however, answered my question: 'Make Model' applies only to the output it is associated with, so thank you.
I should say that the approach I'm using was developed in conjunction with one of JMP's registered training partners, who are experts in the system. I guess I was trying to find a shortcut to make it quicker!
Of course, if you use Make Model for one response, you get a Fit Model dialog. You could replace the response with any other column at that point. You could also save that model to the data table to recall it easily later. Or you could fit the model for one response and use the Column Switcher to apply the same model form to the other responses. Perhaps one of those routes will work for you.
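For what it's worth, the underlying idea of reusing one model form (main effects plus an interaction) across several responses can be sketched outside JMP in plain Python. This is only an illustration of the least-squares fit involved, not JMP's own mechanism, and the factor levels and responses below are made up:

```python
import numpy as np

# Hypothetical 8-run study: two two-level factors (coded -1/+1) and two responses.
rng = np.random.default_rng(1)
x1 = np.array([-1, -1, -1, -1, 1, 1, 1, 1], dtype=float)
x2 = np.array([-1, -1, 1, 1, -1, -1, 1, 1], dtype=float)
y1 = 10 + 2 * x1 + 0.5 * x2 + rng.normal(0, 0.1, 8)       # driven by main effects
y2 = 5 - x1 + 1.5 * x1 * x2 + rng.normal(0, 0.1, 8)       # driven by an interaction

# One model form (intercept, x1, x2, x1*x2) fit to both responses at once:
# least squares accepts a matrix of responses and fits column by column.
X = np.column_stack([np.ones(8), x1, x2, x1 * x2])
Y = np.column_stack([y1, y2])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (4, 2): one column per Y

print(np.round(coef, 2))
```

The point is simply that the same design matrix `X` serves every response; which coefficients turn out negligible differs per response, which is what the Effect Summary pruning discussed below addresses inside JMP.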
I wondered about using the same terms in a model for every response. We usually select the best model separately for each response, because we do not observe the same effects in every response. What is the purpose of using the Screening platform with all the responses if you are ultimately going to fit the same model?
Just a bit confused.
It genuinely helps us to do it this way. If you use the 'same' model for the whole set, you can fit the responses together into one report, complete with linked Profilers. This is very useful, especially when presenting to non-JMP users, who can sometimes get lost if you're jumping from window to window.
To get around the very issue you're mentioning (which you are, of course, 100% correct about), I can remove the terms that aren't relevant to each specific output by using Remove in the Effect Summary, with the added benefit of being able to see how it affects the model, the RMSE, RSq, and p-value 'live'.
I fully understand that this is probably 'backwards' compared to how you would recommend, but I've found it to be a very powerful way to explore my data and 'play around' with the outputs (whilst keeping an eye on the statistical relevance).
This currently falls down when you have crossed effects that aren't identified in every output, as you then need either to cross them specifically in the Fit Model dialog or to add them in the Effect Summary of the Fit Group model; hence my probably naive (and very confusing) question!
No problem. Glad you found a way that works for you.
Just an additional comment...whenever you have more than one Y, I recommend running Analyze>Multivariate Methods>Multivariate to get a look at the relationships between the Y's. For the Y's that correlate well (and the correlation is supported by theory), you can reduce the number of different models you will need to evaluate.
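As a rough sketch of that correlation check (again outside JMP, with invented numbers), the pairwise correlations among the Y's can be computed directly; responses that correlate strongly, with theoretical support, are candidates for sharing one model:

```python
import numpy as np

# Hypothetical responses from the same 8-run study: y2 tracks y1 closely,
# while y3 varies independently of both.
rng = np.random.default_rng(7)
y1 = rng.normal(0, 1, 8)
y2 = 2 * y1 + rng.normal(0, 0.1, 8)
y3 = rng.normal(0, 1, 8)

# Correlation matrix among the responses (each list entry is one variable).
corr = np.corrcoef([y1, y2, y3])
print(np.round(corr, 2))
```

Here y1 and y2 show a strong correlation, suggesting one model form could serve both, while y3 would likely need its own.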