Feline614
Level I

DSD Interpretation and Next Steps

Good morning, I'm currently learning JMP along with DSDs and optimization. I ran my first experiment and would like feedback on the analysis and next steps. I'm using JMP Pro 17.

 

The experiment objective: Maximize the capture/recovery of bacteria to magnetic particles.

 

The setup: I picked the 7 variables I thought could influence the outcome and tested them in 22 runs (2 blocks; it takes me 2 days to run that many samples). The model identified 3 significant variables, but it appears that 2 of those 3 could benefit from a wider testing range. I say this based on the Prediction Profiler graphs. Widening the ranges is possible and feasible (to an extent), so I can retest them over a wider range.

 

1. Do I need to rerun the DSD with all 7 variables, or should I just run a central composite design with the significant variables that covers the new ranges of those 2 variables? I'm leaning towards the CCD, since I don't think the increased ranges will change the non-significance of the other variables, and I could then use the CCD to predict the optimal parameters.

 

2. I assume the DSD model could not predict the optimal parameters because I did not have the correct ranges. Is this right? Or am I missing that output somewhere? I thought that since fewer than half of my variables were significant, the design could then be used for optimization.

 

3. Am I missing something in the output that I should be looking at? Again, I'm new, and the output has a lot of data to digest. My ultimate goal is to maximize the recovery of bacteria, so I focused on which variables are significant and not much else in the output. Are there specific things I should pay special attention to besides the alpha values? I did find some messages about the free online DOE class, and I will be taking it, but I would appreciate specific feedback on my output so that I can relate what that class teaches to my own data.

 

I tried to attach the fit output, but I kept getting an error message that the JMP file type is not supported, so I attached it as a PDF.

 

Thanks in advance for the help/feedback! 

5 REPLIES
Victor_G
Super User

Re: DSD Interpretation and Next Steps

Hi @Feline614,

 

Welcome to the Community!

 

I'm afraid there might not be enough information to guide you precisely, but I will do my best to give you some help and feedback. For further guidance, could you share an anonymized dataset (Anonymize Data (jmp.com)) so I can better evaluate and help?

 

Your next steps rest on the outputs of a model and the detection of statistically significant effects. It seems you have tried only one analysis, "Fit DSD" (which is the recommended analysis for DSDs). Have you tried other modeling approaches? Since you have JMP Pro, you could use Generalized Regression with different estimation methods: "Pruned Forward Selection" (with the AICc/BIC validation method), "Two-Stage Forward Selection", or possibly "Best Subset" (only if you have a limited number of factors, as it tests many models combining main effects and higher-order terms). You could also test Stepwise Regression as an alternative model.

It could be interesting, before moving forward, to see whether the different candidate models agree on the detected effects, and to check where they differ. This would give you a more reliable overview of the important factors and effects in your study.
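To illustrate this cross-checking idea outside JMP (a minimal sketch only; the file name and column names are hypothetical, not your actual analysis), here is how a few candidate models could be compared on AICc in Python:

```python
# A minimal sketch (hypothetical data and column names) of comparing a few
# candidate models by AICc, analogous to cross-checking several JMP fits.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dsd_runs.csv")  # assumed export of the DSD run table

candidates = {
    "main effects":        "Y ~ X2 + X6 + X8",
    "with interaction":    "Y ~ X2 + X6 + X8 + X6:X8",
    "with quadratic term": "Y ~ X2 + X6 + X8 + X6:X8 + I(X2**2)",
}

for name, formula in candidates.items():
    fit = smf.ols(formula, data=df).fit()
    n, k = fit.nobs, fit.df_model + 1           # parameters incl. intercept
    aicc = fit.aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    print(f"{name:22s} AICc = {aicc:8.2f}  R2 = {fit.rsquared:.3f}")

# Models that agree on which terms matter, across several criteria, give a
# more trustworthy picture of the active effects than any single fit.
```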

 

Concerning your other questions:

  1. You can use the Augment Design platform to change the factor ranges and explore a different experimental region. This option may be more interesting than re-running or creating a new DoE (like a CCD), because the information gained from your first DoE is kept, minimizing the number of runs the augmented design requires.
    Concerning whether to keep all factors or only the significant ones, it depends on their importance/contribution to the model (both statistical significance and effect size), and also on whether their ranges will change in the augmentation. If you don't change the range of a factor that was statistically non-significant and had a low effect size, you can probably exclude it from the augmented DoE (fixing it at a specific value) to save some experimental budget. But if a factor's range changes, I would keep it in the augmentation, to make sure it doesn't become statistically significant and important to the responses over the new range.
  2. Your model seems quite precise, except for 2 points that fall away from the regression line. Was there any experimental change or error? Do those factor settings create instability or high uncertainty in the response? It might be worth investigating these two points, and perhaps repeating them if possible, to distinguish a "model error" from an "experimental error". To answer your question, you also need to know your prediction-uncertainty acceptance threshold and your response target: your model has an RMSE of 0.0797; how does that compare to your experimental variability? Is this RMSE acceptable from domain expertise, taking repeatability/reproducibility uncertainty into account? What is the maximum response you hope to obtain? The answers will be driven by the quality of the model and by the domain experts' acceptance thresholds (for both the predicted mean response and the predicted response variance).
  3. There are a few things to check in your outputs: the Lack of Fit (jmp.com) test is statistically significant, which can have several explanations. One possibility is low variance among your replicates, or their representativeness (for example, if they are only centre points, they may not represent the variability across your full experimental space), giving a low pure error; another is higher-order effects missing from the model.
    You can also check the two "strange" points mentioned before, and check the regression assumptions: "Plot Residual by Normal Quantiles" to check the normality assumption of the residuals; "Plot Residual by Predicted" to check that residuals scatter randomly around 0 (homoscedasticity and linearity assumptions); and "Plot Residual by Row" to check that the run order does not influence the results (independence of the observations).
    Also, I would try using the Block variable as a random effect (instead of a fixed effect, as in your model here). As I understand it, the block represents the day on which the experiments were run, so it is only a sample of possible days; you are not particularly interested in the fixed effect of day on the mean response, but rather in the influence of the day random effect on the variability of the response. You can fit a mixed model in the Fit Model platform, specifying your block effect as Random: Mixed and Random Effect Model Reports and Options (jmp.com). A minimal illustration of this idea follows this list.
    Using Day as a random effect may also help you better evaluate the variability of the response, and it may improve the lack-of-fit test: that variation will no longer be attributed to model error, so the lack-of-fit term may decrease.
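Here is a minimal sketch of the block-as-random-effect idea (hypothetical column names; Python is used purely as an analogue of JMP's mixed model with the block specified as Random):

```python
# A minimal sketch (hypothetical columns) of fitting the Day block as a
# random effect, analogous to JMP's Fit Model with Block set to Random.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dsd_runs.csv")  # assumed export; "Day" is the block column

# Fixed effects for the factors of interest; a random intercept per day.
mixed = smf.mixedlm("Y ~ X2 + X6 + X8", data=df, groups=df["Day"]).fit()
print(mixed.summary())

# The "Group Var" line estimates the day-to-day variance component. If it
# is near zero, the block contributes little to the response variability.
# Note: with only 2 days the variance estimate is crude, so treat it as a
# rough diagnostic rather than a precise number.
```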

 

Some resources to help you understand DSDs:
Introducing Definitive Screening Designs - JMP User Community

Definitive Screening Design - JMP User Community

The Fit Definitive Screening Platform (jmp.com)

Using Definitive Screening Designs to Get More Information from Fewer Trials - JMP User Community

And more generally on DoEs :
DoE Resources - JMP User Community

 

Hope this answer will help you,

Victor GUILLER
Scientific Expertise Engineer
L'Oréal - Data & Analytics
Feline614
Level I

Re: DSD Interpretation and Next Steps

Hi @Victor_G, I really appreciate you taking the time to write me a thorough response!

 

I attached an anonymized dataset.

 

I ran the various models you mentioned (attached), except that "Two-Stage Forward Selection" was not an option (it was grayed out). The stepwise approach added one variable at p = 0.12, which looked like it may influence the outcome per the Prediction Profiler, but biologically, in this specific experiment, I wouldn't expect it to be significant (I will be repeating this experiment with different bacteria, and I expect it to be significant in those tests). "Pruned Forward Selection" with AICc and Best Subset with AICc bumped one variable to p = 0.0564, but it would make biological sense for it to have a significant effect on the system. "Pruned Forward Selection" with BIC and Best Subset with BIC were similar to stepwise, with the addition of the variable at p = 0.12. I also deleted the variables at <0.05, but doing so did not change the significance of the other factors.

 

Overall, I'm not surprised by the variables the models selected, except X_7; I thought for sure that one would have a significant effect.

 

Do you have a good resource that explains these models at a beginner level? I found some resources on JMP, but I only learned the stepwise approach in college. I don't actually know how to interpret the other models you mentioned, beyond a general understanding of p-values.

 

1. I created an augmented design. I eliminated 2 variables (X_4 and X_5) because biologically they should not be significant in this system (they may be in a future test, though). I left all other factors the same except the 2 significant factors that appear to benefit from testing a higher range (X_6 and X_8). I did not change the default number of runs, but I did block them. Should I change the number of runs? The video you sent (Using Definitive Screening Designs to Get More Information from Fewer Trials - JMP User Community), at ~35:56, states that a weakness of DSDs is that the "Factor range for screening may not include optimum so, follow on design will be over different ranges - really can't augment." But, like you suggested, I can augment it with different values... is this going to be an issue? Do I need to do anything differently when I go to analyze the data?

 

2. I can rerun those 2 points. When I enter the new results, should I just replace those points, or somehow create a new block to include the new values? If I need to create a new block, how do I do it? I can't explain why they appear to be erroneous; the point to the left could have variability due to the low concentration of bacteria (a Poisson distribution issue). The point to the right is a little more perplexing, but I am working with bacteria, so an outlier every now and then isn't abnormal.

 

For RMSE, I'm a bit confused. My understanding is that a low RMSE is good and indicates the model predicts outcomes well. I can't find any literature on anyone doing something similar to what I'm doing, so I don't know what to compare it to. Based on what I know about the system, I would not expect a lot of noise in this particular experiment. This particular bacterium and system give me pretty consistent results (I have more variability with a different bacterium, but that is a problem for next week).
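One way to make this comparison concrete is to benchmark the model RMSE against the pooled standard deviation of replicate runs, which estimates pure experimental error. A minimal sketch with invented replicate values (not from this experiment):

```python
# Illustrative sketch: benchmark model RMSE against pure error estimated
# from replicates. The replicate values below are invented, not real data.
import numpy as np

replicate_groups = [
    np.array([0.82, 0.79, 0.85]),  # runs repeated at one factor setting
    np.array([0.41, 0.47]),        # runs repeated at another setting
]

ss = sum(((g - g.mean()) ** 2).sum() for g in replicate_groups)
dof = sum(len(g) - 1 for g in replicate_groups)
pooled_sd = np.sqrt(ss / dof)  # estimate of pure (measurement) error

model_rmse = 0.0797  # RMSE reported for the Fit DSD model in this thread
print(f"pooled replicate SD = {pooled_sd:.4f} vs model RMSE = {model_rmse}")

# An RMSE close to the pooled SD means the model fits about as well as the
# noise allows; an RMSE much larger suggests missing terms or lack of fit.
```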

 

3. I plan to rerun the two "strange" points. I added the regression assumption plots to the attachment "Fit Least Squares." The residual-by-predicted plot may show some clustering?

 

I agree, the blocking should be a random effect. I found on another discussion post that you can set this when you initially create the design, but I can't find the box to check (I assumed random was the default, but I guess not). I made it a random effect, and it did affect the even-order effects. The "Lack of Fit" box is grayed out now, though... I'm not sure why, or what that means. In simple terms, when there is an "even-order effect," what does it mean when it is X_2*X_2? I understand something like X_2*X_3, where there is an interaction between the two variables, but I don't understand how a variable interacts with itself.

 

Thank you for the resources; they were extremely helpful, especially the help pages. I'm still overwhelmed by the level of statistics, but I appreciate your patience in teaching me and helping me become a better scientist. This will be the first of many DSDs for me; I'm glad I started with the "simplest" one!

Victor_G
Super User

Re: DSD Interpretation and Next Steps

Hi @Feline614,

 

Thanks for your response. Next time it might be easier to provide a JMP file instead of several Excel and PDF files: with a JMP file you keep the analyses as saved scripts, along with all the column information and properties (particularly important for DoE, since each factor may carry up to 3 column properties, which I had to add manually when importing the data from Excel).

 

The different models you created are quite consistent, with a lot of similarities, which is a good sign.
I'm surprised not to see any interactions or quadratic effects in any model, as DSDs are quite effective at detecting them (if there aren't too many). I relaunched the Generalized Regression platform including interaction terms and quadratic effects for X2 to X8 (keeping X1 as a fixed block effect), and the models seem to benefit from these added terms, both in explanatory power (R² increases up to 0.95) and predictive power (RASE decreases to 0.05). There appears to be an interaction between X6 and X8 and a quadratic effect for X2. I suspect the statistically significant Lack-of-Fit test on your initial model is linked to these missing higher-order terms.
The models also agree closely with each other about the included terms:

[Image: comparison of the terms included in each model]

Since you are interested in screening and optimization at this initial DSD stage, using an information criterion like AICc as the validation method in Generalized Regression makes sense: you want a model that is both explanatory (keeping only the most important variables) and predictive (keeping the variables that improve predictions and lower the RASE/RMSE).
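For reference (a standard formula, stated here for convenience), AICc is AIC plus a small-sample penalty that grows as the parameter count k approaches the run count n:

```latex
\mathrm{AICc} \;=\; \underbrace{-2\ln\hat{L} + 2k}_{\mathrm{AIC}} \;+\; \frac{2k(k+1)}{n-k-1}
```

where L-hat is the maximized likelihood, k the number of estimated parameters, and n the number of runs. With only 22 runs this correction matters: each extra term is penalized more heavily than under plain AIC, which is why AICc tends to select leaner models in small DoEs.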

 

If you're looking for resources about Generalized Regression, here are my first suggestions:

Fitting a Linear Model in Generalized Regression 

Using Generalized Regression in JMP® Pro to Create Robust Linear Models 

 

About your other points :

  1. Augmentation can be done while keeping the active factors found in the previous analysis. Even if X5 may not be significant/important, it is still kept in the models, so I would keep it. I would rather keep a non-important factor in the augmentation phase than remove a possibly important factor too quickly before the next phase.
    So based on the model results alone (to be confirmed by your domain expertise), I would keep factors X2, X5, X6 and X8 for the augmentation phase. Concerning the number of runs, it's hard to say, as it depends on the changes to the factor ranges, the assumed model, and the certainty/precision you want from the model. Since you are keeping a small number of factors (4), you can augment your design assuming a Response Surface model: clicking "RSM" adds all 2-factor interactions as well as quadratic effects for the selected factors (the assumed model is written out just after this list).
    The problem mentioned in the video arises mainly when the factor ranges are disconnected/disjoint between the initial screening and the augmentation; since you only want to extend or shift the ranges a little, you can still benefit from the information in your initial design runs.
  2. Again, there is no definitive answer to your question. If you get differing results, you can repeat the experiments several times (to have more confidence in the measured response).
    If you have run new experiments from scratch and measured them independently, add new rows with the values: these new points are replicates of the previous ones. If you only repeated the measurement on the same experiments, replace the original value with the mean of the measurements: these are repetitions only (no independent new runs, so no added rows).
    You can then check whether the changed values of these 2 points improve the models' quality.
  3. There doesn't seem to be a specific pattern in your residuals. The clustering you see is likely because your response measurements are either high or low, with few middle values (except for the two "strange" points). So in Residuals vs. Predicted you see 2 clusters of points: one for low predicted response values and one for the highest.
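For reference, the model that the "RSM" option targets for 4 factors is the standard full quadratic (a known form, stated here for convenience):

```latex
Y \;=\; \beta_0 \;+\; \sum_{i=1}^{4} \beta_i x_i \;+\; \sum_{1 \le i < j \le 4} \beta_{ij}\, x_i x_j \;+\; \sum_{i=1}^{4} \beta_{ii}\, x_i^2 \;+\; \varepsilon
```

That is 1 intercept, 4 main effects, 6 two-factor interactions, and 4 quadratic terms, i.e. 15 parameters, which sets a lower bound on the number of runs the augmented design must provide.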

The good part about using the block as a random effect is that, through a mixed model, you can assess whether the blocking variable has a statistically significant effect on the response variance. In your case it doesn't seem to, so you may remove the blocking variable from your model. Running models with the blocking variable as a fixed effect likewise shows no statistically significant impact on the mean response, so it can be removed from future analyses.

Concerning the term X2*X2, it's not an interaction but a quadratic effect of X2: the response is linked to X2 through a quadratic term, i.e. Y = β₀ + β₂·X2 + β₂₂·X2² (+ other terms in the model).
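To make the "self-interaction" idea concrete (standard algebra, not specific to this dataset): with a quadratic term, the fitted relationship and its slope are

```latex
Y = \beta_0 + \beta_2 X_2 + \beta_{22} X_2^2 + \dots,
\qquad
\frac{\partial Y}{\partial X_2} = \beta_2 + 2\,\beta_{22} X_2 .
```

So the effect of X2 on the response changes with the level of X2 itself, which is what produces curvature; if β₂₂ < 0, the response peaks at X2 = −β₂/(2β₂₂), which is exactly why quadratic terms matter for optimization.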

 

I attached the JMP file with all the models and scripts tested for this answer.

I hope this follow-up will help you,

Victor GUILLER
Scientific Expertise Engineer
L'Oréal - Data & Analytics
Feline614
Level I

Re: DSD Interpretation and Next Steps

@Victor_G 

 

I'm sorry you had to enter all the data by hand. Also, sorry for the delay, but I wanted to respond with the next batch of results. I believe I attached the correct format this time.

 

I ran the augmented design. I removed only 1 term based on my 'expertise' and left all other significant and non-significant factors from the original DSD. Runs 1-22 are the original DSD; runs 23-34 are the augmented design; runs 35 & 36 are independent replicates of row 5; runs 37 & 38 are independent replicates of row 10 (rows 5 and 10 were the 'outliers' in the original DSD).

 

Can you check my logic so I can confirm I am running and analyzing the data correctly?

 

  1. I ran a mixed model with blocking as a random effect to check for significance. It was not significant in the model, so I can drop the variable. My residuals don't look as "random" as before, but I don't think they're terrible.

 

  2. I ran Generalized Regression with multiple estimation methods using AICc. I believe all interactions were added. I know you said last time that you added interactions, so I want to confirm I set up the model correctly this time.

 

How did you create the graph you embedded? I found these directions, but they didn't produce the nice visual comparison you provided: https://www.jmp.com/support/help/en/16.1/?os=win&source=application#page/jmp/model-comparison.shtml#

 

  3. I added the Prediction Profiler to the models. When I was trying to find the optimal conditions to maximize the output, I realized the model predicts values >1.0; however, the output is a proportion, and theoretically 1 is the highest value possible. I then tried changing the range for the output (minimum 0, middle 0.5, maximum 1), but it didn't seem to change the profiler...

 

Biologically, the variables marked as significant make sense. What does not make sense to me is X_6: the original DSD range was 1-5, and it was augmented to 1-20. Biologically, anything beyond 20 shouldn't make a difference, which is what the least squares model shows, but not the others.

 

How do you know which model to choose? How do you account for the maximum output value being 1.0? How would you go about finding the optimal parameters for the variables?

 

  4. I was told to include X_1 as a variable in the model, but I'm not convinced it should be one, because in the "real world" it isn't a variable I can control; I can control it only in a laboratory environment. Moreover, the output is the proportion of recovered to starting bacteria, and X_1 is the starting number, so X_1 is the denominator of the output variable. I thought X_1 should be held constant and the model optimized at that constant, but the argument was made that we want to know how the variables interact if X_1 changes. I guess my confusion is: should X_1 be included as a variable if it is the denominator of the output?

 

Thank you!

Victor_G
Super User

Re: DSD Interpretation and Next Steps

Hi @Feline614,

 

  • Looking at the replicate values you added, it seems you have quite high variability in your measured response. Expanding the ranges of X4 and X6 sounds like a good idea, as it yields a wider range of response values, which may help discriminate differences more easily (increasing the signal-to-noise ratio).
    This experimental variability may also be confirmed by the random effect X7, which does appear significant in the full mixed model you created:
    [Image: mixed-model report showing the X7 random effect]
    Since the random effect X7 appears statistically significant, I wouldn't remove it from the analysis: it indicates there may be a statistically significant change in variability between your first set of experiments (the original DSD) and the second (the augmented runs).

  • Yes, launching the Fit Model platform with 2-factor interactions and quadratic effects in addition to main effects is a good idea; a DSD can detect them as long as there aren't "too many" statistically significant effects (i.e., you have enough degrees of freedom left to detect them and the effects are large enough).
    Concerning the graph I embedded, unfortunately JMP doesn't produce it automatically: I right-click the "Active Parameter Estimates" report and choose "Make Combined Data Table". Make sure the "Active Parameter Estimates" reports of all models are displayed so none is missed. You then get a new data table with the values of the terms included in each model, which helps you compare the models. You can do the same with the "Model Summary" report for a general comparison of the models across several metrics (R², R² adjusted, RMSE, AICc, ...).
    In your case, visualizing the terms included in the different models lead to this graph :
    [Image: graph comparing the terms included in the different models]

    So the models pretty much agree on the included terms, which is a good sign.

  • If your response X8 is supposed to lie in the range 0-1, you can transform the response column with a logistic transformation and use the transformed response in the modeling: Transform Columns in a JMP Platform
    Using this transformed response with the same model types in GenReg shows an improvement in the information criteria AICc and BIC, without losing explanatory (R²) or predictive (RMSE) power (and with similar terms included and the same model complexity):
    [Image: model summary comparison for the logistic-transformed response]
    I saved the scripts of these models with the transformed response so that you can take a look and evaluate this option. A minimal illustration of the same transform appears just after this list.
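Here is a minimal sketch of the logit-transform idea for a proportion response bounded in (0, 1) (hypothetical file and column names; an analogue of the logistic column transform above, not the actual JMP analysis):

```python
# A minimal sketch (hypothetical column names) of modeling a proportion
# response on the logit scale, so back-transformed predictions stay in (0,1).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("augmented_runs.csv")  # assumed export of the run table

eps = 1e-6                              # guard against exact 0 or 1
p = df["Y"].clip(eps, 1 - eps)          # "Y" stands in for the response
df["Y_logit"] = np.log(p / (1 - p))     # logit: maps (0,1) onto the real line

fit = smf.ols("Y_logit ~ X2 + X6 + X2:X6", data=df).fit()

# Back-transform predictions: the inverse logit maps every fitted value
# into (0, 1), so the model can no longer predict recoveries above 1.
pred = 1 / (1 + np.exp(-fit.fittedvalues))
print(pred.describe())
```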

 

Concerning your remark about X6: you increased the range to 20, but I wouldn't extrapolate the model's predictions outside the tested range. It looks like there is an increasing linear trend in the response with X6, but maybe that trend stops after X6 reaches 20. Or maybe not, and that would be new knowledge for you. You can't be sure unless you test outside the range. You can create validation/test points outside the range and compare the measured values with the predicted values, to see whether there is any deviation from the model indicating that the increasing linear trend stops or changes outside the tested range of X6.
Since there is indeed a quadratic effect for X6 in the Least Squares model, supported by your domain knowledge, you can also force the inclusion of these terms (the X6 main effect and quadratic effect) in the model construction: Advanced Controls (jmp.com)

 

Regarding your general question, there is no single definitive model to choose: you can select several similar models if that makes sense statistically and experimentally (domain expertise). What I would do is:

  1. Launch a Mixed Model with X7 as a random effect to assess the statistical significance of this random block.
  2. Launch several GenReg models and compare the terms included in the models, as well as the models' performance, to get a better view of which terms are important/active. Evaluate and select informative models based on statistical evaluation and domain expertise.
  3. Since your random block effect is significant in the full model, build a refined mixed model with the important terms, based on what you learned in the two previous steps. Check whether the random block effect is still significant, and create a model (mixed, or based on the generalized regression models) that best suits your learnings. Validating your model(s) will help you gain confidence in them, and you can also combine several models with the Model Comparison (jmp.com) platform, using the saved Prediction Formula columns of your models.

About X1, this comes back to your objective and what you want to do with the results: are you investigating the system only in the lab environment, or do you want your findings to be used in another environment? If the latter, you may run some confirmatory/validation runs in that new environment, to make sure the lab-scale findings transfer. From an analytical point of view, it may not be a problem that X1 is handled differently between the lab and the other environment: in the lab, you can maximize desirability with no constraints added in the Profiler; in the other environment, knowing the fixed value of X1, you can lock the X1 value and optimize the other factors to maximize desirability: Set or Lock Factor Values (jmp.com). A hypothetical sketch of this locked-factor optimization follows.
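As an illustration (the prediction function and its coefficients below are invented for this sketch, not taken from the actual JMP model), locking X1 and optimizing the remaining factors might look like this:

```python
# A hypothetical sketch of locking X1 and optimizing the remaining factors,
# mirroring "Set or Lock Factor Values" in JMP's Profiler. The prediction
# formula and coefficients are invented for illustration only.
import numpy as np
from scipy.optimize import minimize

def predict_recovery(x1, x2, x6, x8):
    # Stand-in for the model's saved prediction formula (logit scale).
    z = -1.0 + 0.8 * x2 + 0.15 * x6 - 0.004 * x6**2 + 0.6 * x8 + 0.02 * x6 * x8
    return 1.0 / (1.0 + np.exp(-z))  # back-transform to a 0-1 proportion

X1_LOCKED = 5.0  # fixed value dictated by the real-world setting

# Maximize predicted recovery over the free factors within tested ranges.
res = minimize(
    lambda v: -predict_recovery(X1_LOCKED, *v),
    x0=[0.5, 10.0, 0.5],               # starting guesses for X2, X6, X8
    bounds=[(0, 1), (1, 20), (0, 1)],  # stay inside the design space
)
print("optimal X2, X6, X8:", res.x, "| predicted recovery:", -res.fun)
```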

 

Hope this follow-up will help you,

 

Victor GUILLER
Scientific Expertise Engineer
L'Oréal - Data & Analytics