
Design evaluation of optimal designs

Optimal designs are built from one of several optimality criteria, chosen to match the goal of the experiment. Using the design evaluation tools, we can compare and contrast candidate designs against that goal. In each of the case studies below, the tools relevant to each experimental goal are explored.

 

Summary of practical goals and design evaluation tools

I-optimal
Practical goal: Accurate predictions
Compare designs for prediction performance:

  • Prediction variance profile
  • Fraction of design space plot

D-optimal
Practical goal: Understand factor relationships
Tools to evaluate designs for minimizing the error of coefficients:

  • Power analysis
  • Alias matrix
  • Color map on correlations

A-optimal
Practical goal: Understand factor relationships focusing on specific factors of interest
Tools for comparing the effects of weights:

  • Power analysis
  • Color map on correlations

Alias-optimal
Practical goal: Understand factor relationships where main effects (ME) are unbiased by possible large active two-factor interactions (2FI)
Tools to check for eliminating correlations among 2FI:

  • Alias matrix
  • Color map on correlations
  • Estimation efficiency
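For reference, here is a compact sketch of how these criteria are commonly defined. The notation is ours, not from the original post: X is the model matrix, f(x) the model expansion of a point x in the design region R, W a diagonal matrix of weights, and X1 and X2 the model matrices for the fitted terms and the potentially biasing 2FI terms.

```latex
% Sketch of the standard criteria definitions; JMP's exact formulations
% may differ in normalization and implementation details.
\begin{align*}
\text{D-optimal:} &\quad \max_X \ \det\!\big(X^\top X\big) \\
\text{I-optimal:} &\quad \min_X \ \frac{1}{\operatorname{vol}(\mathcal{R})}
    \int_{\mathcal{R}} f(x)^\top \big(X^\top X\big)^{-1} f(x)\, dx \\
\text{A-optimal:} &\quad \min_X \ \operatorname{tr}\!\big(W\,(X^\top X)^{-1}\big) \\
\text{Alias-optimal:} &\quad \min_X \ \operatorname{tr}\!\big(A^\top A\big)
    \text{ subject to a D-efficiency bound, where }
    A = \big(X_1^\top X_1\big)^{-1} X_1^\top X_2
\end{align*}
```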

 

I-optimal: Minimizing prediction variance

For this case study, the goal of the experiment is to find the operating settings of three continuous factors that maximize percentage yield. Because interactions and curvature are plausible, the choice is made to conduct a response surface design.

The operational decisions made from the resulting regression model are based on how well the three optimal operating settings predict percentage yield. Therefore, it is important that the predicted percentage yield carries as little error as possible to make accurate operational decisions.

Given this information about the goal of our design, I-optimal is ideal because it minimizes the average prediction variance. I-optimal is also the default option in JMP when using the RSM button in Custom Design.

Tools to compare designs in terms of their potential prediction performance:

  • Prediction variance profile
  • Fraction of design space plot
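To make these tools concrete, here is a minimal sketch in Python (not JMP's implementation) of the quantity both tools display: the relative prediction variance f(x)'(X'X)^(-1)f(x) for a full quadratic model. The 15-run face-centered central composite design below is an illustrative stand-in, not the I-optimal design JMP would generate.

```python
import numpy as np

def model_matrix(pts):
    """Expand factor settings into full quadratic (RSM) terms:
    intercept, main effects, two-factor interactions, squares."""
    x1, x2, x3 = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.column_stack([np.ones(len(pts)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Illustrative face-centered central composite design in coded units:
# 8 cube points, 6 face-centered axial points, 1 center point.
cube = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)])
axial = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]])
design = np.vstack([cube, axial, np.zeros((1, 3))])

X = model_matrix(design)
XtX_inv = np.linalg.inv(X.T @ X)

def rel_pred_var(point):
    """Relative prediction variance f(x)' (X'X)^-1 f(x) at one point."""
    f = model_matrix(np.atleast_2d(point))[0]
    return f @ XtX_inv @ f

# What the Prediction Variance Profile shows at the center of the space.
print("relative prediction variance at (0,0,0):", rel_pred_var([0, 0, 0]))

# Fraction-of-design-space style summary: sample the region uniformly
# and look at the distribution of the relative prediction variance.
rng = np.random.default_rng(1)
F = model_matrix(rng.uniform(-1, 1, size=(20000, 3)))
v = np.einsum('ij,jk,ik->i', F, XtX_inv, F)
print("10/50/90% quantiles:", np.quantile(v, [0.1, 0.5, 0.9]))
```

The Prediction Variance Profile reads off rel_pred_var at the profiler's current settings; the Fraction of Design Space plot is essentially the empirical distribution of v over the sampled region.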

[Video: I-optimal.mp4 (2:05), available in My Videos]

D-optimal: Minimizing error of coefficients

For this case study, the goal of the experiment is to identify which effects from six continuous factors are active in influencing the percentage yield. At this point we are interested only in the main effects, screening the six continuous factors for active effects.

The operational decisions made from the resulting regression model are based on the parameter estimates of the continuous factors and their statistical significance in predicting percentage yield. Therefore, it is important that the parameter estimates, or coefficients, have as little error as possible to make operational decisions on which factors are active for percentage yield.

Given this information about the goal of our design, D-optimal is ideal since it minimizes the error of the coefficients of our factors; it is also the default for main-effect models in Custom Design.

Tools to evaluate designs for minimizing the error of coefficients:

  • Power analysis
  • Alias matrix
  • Color map on correlations
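As a concrete illustration of what the D-criterion measures, here is a minimal sketch in Python (not JMP's coordinate-exchange construction): it compares an orthogonal 8-run main-effects design against a randomly chosen 8-run design by det(X'X) and by the coefficient variances diag((X'X)^(-1)) that drive power. The designs and run size are illustrative assumptions.

```python
import numpy as np
from itertools import product

def main_effects_matrix(design):
    """Intercept plus one column per factor (main-effects model)."""
    return np.column_stack([np.ones(len(design)), design])

# Candidate 1: regular 2^(6-3) fraction (8 runs, 6 factors) with
# generators D = AB, E = AC, F = BC -- an orthogonal main-effects design.
base = np.array(list(product([-1, 1], repeat=3)))
a, b, c = base[:, 0], base[:, 1], base[:, 2]
frac = np.column_stack([a, b, c, a * b, a * c, b * c])

# Candidate 2: 8 runs drawn at random from the +/-1 candidate points.
rng = np.random.default_rng(7)
rand = rng.choice([-1, 1], size=(8, 6))

for name, d in [("fractional factorial", frac), ("random design", rand)]:
    X = main_effects_matrix(d)
    XtX = X.T @ X
    det = np.linalg.det(XtX)
    if det > 1e-9:
        var = np.diag(np.linalg.inv(XtX))  # coefficient variances / sigma^2
        print(f"{name}: det(X'X) = {det:.4g}, max coef variance = {var.max():.3f}")
    else:
        print(f"{name}: singular -- some coefficients are not estimable")
```

The orthogonal fraction attains the largest determinant and the smallest coefficient variances possible for this run size, which is exactly what a D-optimal search drives toward.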

[Video: D-optimal.mp4 (2:24), available in My Videos]

A-optimal: Weighting different parts of the model

For this case study, we want to run an experiment examining the main effects and 2FI of five continuous factors to maximize percentage yield. One of the five factors is temperature, and we are especially interested in the effect of temperature and its interactions with the other four factors.

The group of parameters we would like to emphasize includes the main effect of temperature and temperature's interactions with X1-X4. By weighting these terms more heavily, the resulting design places more importance on factor combinations that lower the variance of the estimates for the temperature terms relative to the other terms. A-optimal designs are flexible and allow different groups of parameters to be weighted: under the red triangle, choose Advanced Options and then A-optimal weights. Here, a weight of 10 is applied to the terms involving temperature.

A-optimal is similar to D-optimal in that it focuses on the parameter estimates rather than on the response, as I-optimal does. Specifically, A-optimal minimizes the average variance of the parameter estimates.

Tools for comparing the effects of weights:

  • Power analysis
  • Color map on correlations
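To make the weighting idea concrete, here is a minimal sketch of the weighted A-criterion trace(W(X'X)^(-1)). The 16-run resolution V fraction below is an illustrative stand-in, not the weighted A-optimal design JMP would construct; the point is how the weights enter the criterion.

```python
import numpy as np
from itertools import product, combinations

# Illustrative 2^(5-1) half fraction (16 runs, 5 factors) with E = ABCD;
# resolution V, so it supports a main-effects + 2FI model.
base = np.array(list(product([-1, 1], repeat=4)))
design = np.column_stack([base, base.prod(axis=1)])

# Model matrix: intercept, 5 main effects, all 10 two-factor interactions.
cols = [np.ones(len(design))] + [design[:, i] for i in range(5)]
labels = ["Int"] + [f"X{i+1}" for i in range(5)]
for i, j in combinations(range(5), 2):
    cols.append(design[:, i] * design[:, j])
    labels.append(f"X{i+1}*X{j+1}")
X = np.column_stack(cols)

# Treat X5 as temperature: weight its main effect and its interactions by 10,
# mirroring the A-optimal weights set under Advanced Options.
weights = np.array([10.0 if "X5" in lab else 1.0 for lab in labels])

V = np.linalg.inv(X.T @ X)  # coefficient covariances / sigma^2
print("unweighted A-criterion:", np.trace(V))
print("weighted A-criterion:  ", float(weights @ np.diag(V)))
```

An A-optimal search with these weights favors designs that shrink the weighted sum, so the variances of the temperature terms count ten times as much as the others.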

[Video: A-optimal.mp4 (3:52), available in My Videos]

Alias-optimal: Eliminating correlations among 2FI terms

As in the D-optimal case study, the goal of running the experiment is to identify which effects from six continuous factors are active in influencing the percentage yield. The difference from the D-optimal example is that we believe some large 2FI effects may be active. At this point we are interested only in the main effects, screening the six continuous factors for statistical significance, and we have a small budget of runs. Six factors lead to 15 potential interactions, which would require a minimum of 22 runs (one intercept, six main effects, and 15 interactions).

The operational decisions made from the resulting regression model are based on the parameter estimates of the main effects. Therefore, the parameter estimates of the main effects must have as little bias as possible to make operational decisions on which factors have an impact on percentage yield.

Given this information about the goal of our design, Alias-optimal is ideal since it minimizes the aliasing of the main effects by the 2FI terms without having to run a 22-run design that includes all 15 interactions. The Alias-optimal design requires only 12 runs for six continuous factors.

Tools to check for eliminating correlations among 2FI:

  • Alias matrix (illustrated in the sketch below)

Cost of choosing Alias-optimal over D-optimal:

  • The main effects are no longer uncorrelated, as they are in the D-optimal design.
  • The confidence intervals for the main effects are about 10 percent wider than in the D-optimal design.
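As a concrete illustration of the alias-matrix check, here is a minimal sketch of A = (X1'X1)^(-1) X1'X2 for a 12-run, six-factor screening design. The Plackett-Burman design used here is an illustrative stand-in, not the Alias-optimal design JMP builds: its main effects are biased by active 2FIs with alias coefficients of about ±1/3, which is exactly what an Alias-optimal design is constructed to drive toward zero.

```python
import numpy as np
from itertools import combinations

# 12-run Plackett-Burman design: cyclic shifts of a generator row plus a
# final row of -1s; keep the first 6 of its 11 columns for six factors.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(gen, k) for k in range(11)] + [-np.ones(11, dtype=int)]
design = np.array(rows)[:, :6]

X1 = np.column_stack([np.ones(12), design])                    # intercept + MEs
X2 = np.column_stack([design[:, i] * design[:, j]
                      for i, j in combinations(range(6), 2)])  # 15 2FIs

# Alias matrix: row i shows how much an active 2FI would bias coefficient i.
A = np.linalg.solve(X1.T @ X1, X1.T @ X2)       # 7 x 15
print("largest alias entry:", np.abs(A).max())  # about 1/3 for this design
```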

[Video: Alias-optimal.mp4 (3:35), available in My Videos]

           

          Comments
          shampton82
          Level VII

Hey @O_Lippincott!

Great blog post! I have a question regarding Prediction Variance and how to think about its effect on the results of a DOE. If I'm looking at the Variance Profiler and I take the square root of the value in the profiler, would that basically be giving me the estimated stdev of the response value at a given DOE level? So I might look great on correlation and power, but the uncertainty around my results might be too high to be useful depending on the intended application, and the variance profiler would be the way to determine that?

           

          thanks for any insights!

          Steve

          O_Lippincott
          Staff

Hi Steve @shampton82,

           

Yes, if the desired outcome for your DOE is to make accurate operational decisions, as opposed to screening for active effects, then prediction variance is an important design diagnostic to pay attention to in addition to correlation and power. A common practice is to check various design diagnostics to determine whether the design is adequate for your experimental goal, and to compare potential designs to each other to understand the strengths and weaknesses of each. This section in the JMP Help on the Prediction Variance Profile has a nice description of the background math.
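For readers following along, here is a sketch of the conversion Steve asks about, under the usual model assumptions: the profiler's relative prediction variance is in units of the error variance, so the standard error of the predicted response is its square root times an estimate of sigma, such as the model RMSE. The numbers below are made up for illustration.

```python
# Sketch: converting the profiler's relative prediction variance into a
# standard error for the predicted response (illustrative values only).
rel_var = 0.35   # relative prediction variance read from the profiler
rmse = 2.1       # estimated sigma from the fitted model (RMSE)
se_pred = (rel_var ** 0.5) * rmse
print(f"std error of predicted response: {se_pred:.2f}")
```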

           

          -Olivia