ian-weykamp
Level I

Evaluating DoE with (essentially) Run Order as a factor

Hi,

 

I have a relatively simple design that I'm not sure how to evaluate in JMP.
We're trying to find an I-Optimal design for two factors. One (X1) is a continuous factor with 2 levels and center points. The second (X2) is essentially just the run order times 4; the order of this variable cannot change. We know that X2 leads to changes in our response, and we want to properly model its impact and its interaction with X1, as well as assess quadratic effects. I've attached a sample data set as an example.
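Whatever design is used, the model described here (main effects, X1*X2 interaction, and quadratic terms) can be written out explicitly. Below is a minimal sketch of fitting that full quadratic model by ordinary least squares; the factor settings and response values are made up purely for illustration, not taken from the attached data set.

```python
import numpy as np

# Hypothetical illustration of the model in question:
#   y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2
# All settings and coefficients below are invented for the sketch.
rng = np.random.default_rng(0)

x1 = np.tile([-1.0, 0.0, 1.0], 4)           # continuous factor with center points
x2 = np.repeat([4.0, 8.0, 12.0, 16.0], 3)   # "run order times 4": 4, 8, 12, ...
y = 2 + 0.5*x1 - 0.3*x2 + 0.1*x1*x2 + 0.02*x2**2 + rng.normal(0, 0.1, 12)

# Design matrix with intercept, main effects, interaction, and quadratics
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimated coefficients b0, b1, b2, b12, b11, b22
```

This is only the analysis side; it does not address the restriction that X2 cannot be randomized, which is the crux of the design question.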

The issue we're running into is appropriately setting up and evaluating the design in JMP. 

 

Any help would be greatly appreciated!
Thank you.

3 REPLIES
statman
Super User

Re: Evaluating DoE with (essentially) Run Order as a factor

Interesting, but I am confused.  

My thoughts: How can run order be a factor?  Let's say it is significant, and the 3rd level gives the best results.  How can you possibly operate with this knowledge?  How would you run this process?  Similarly, how could you optimize the interaction of X1 and X2 in any practical sense?  Now, if you say factor X2 is time, then you have a continuous factor with multiple levels.  Understandably, you would have a restriction on randomization, as X2 can't be randomized.  What is changing in time?

Another thought is to just run each level of factor X1 four times (in order) for a total of 12 runs.  Essentially an OFAT over time.
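The OFAT-over-time plan above can be laid out as a simple run table. This is just a sketch of one possible enumeration; the three X1 settings (low/center/high coded as -1/0/1) and the "4 uses per run" counter are assumptions based on the original post.

```python
# Sketch of the suggested OFAT-over-time plan: each of the 3 levels of X1
# is run 4 times in sequence (12 runs total), while X2 simply tracks the
# cumulative tool-use count, increasing by 4 per run.
levels = [-1, 0, 1]  # assumed low / center / high settings for X1
plan = [(run + 1, x1, 4 * (run + 1))          # (run order, X1, X2 = uses so far)
        for run, x1 in enumerate(l for l in levels for _ in range(4))]
for row in plan:
    print(row)
```

Note that with this layout X1 and time are partially confounded, which is exactly the practical trade-off being discussed.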

Another thought is to do sampling (vs. DOE) of each level over time...not restricted to 4 specific times.

 

I'm sure others will chime in.

"All models are wrong, some are useful" G.E.P. Box
ian-weykamp
Level I

Re: Evaluating DoE with (essentially) Run Order as a factor

Perhaps some additional clarity:
We can think of X2 here as a very expensive tool that is used 4 times per measurement and only once per process.  The readings for X2 will be 4, 8, 12, ...  We know that as the number of uses of this tool increases, there is an impact on our response. Thus, we cannot reorder X2, and it can be thought of as essentially the run order of the experiment, since it increases with run order.

Our aim with this design is to model a range of acceptable values of our response over a previously established limit of X2 values. Essentially, the process is run up to a certain number of runs, then stopped.

Victor_G
Super User

Re: Evaluating DoE with (essentially) Run Order as a factor

Hi @ian-weykamp,

 

If I may j(u)mp into the discussion, here are my thoughts on your initial question and your comments.

If I've understood your situation correctly, there are several options for dealing with it:

 

  1. As you expect some "degradation" of your response based on the number of uses of your tool, it may be interesting to consider a degradation analysis, to build a model linking the number of times the tool is used to its impact on the response. I'm not entirely sure you can do a degradation analysis with another factor present, but other members of the Community more familiar with this type of analysis may know how to handle it.
  2. If you expect to use several "tools" in your experiments due to this limit of uses per "tool", why not also run a measurement system analysis (complementary to your initial topic)? That may help you estimate the part of the total variance in your experiments attributable to (differences between) your tools. And if a degradation study tells you how many runs you can "safely" make with one tool, knowing the variance due to a change of tools will help you compare results coming from two tools in an experimental design.
  3. Knowing the number of runs possible with one tool, you may be able to screen more factors of interest by using "tool" as a blocking factor: for example, 4/8/16 runs per tool.
  4. As the number of runs (your X2 column) can't seem to be randomized or blocked, you could maybe switch the factor's design role to "Uncontrolled" instead of "Continuous". See Factors (jmp.com). Creating the design with the correct role gives a more reliable model, rather than trying to modify the design afterward to better suit your needs.
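To make option 3 concrete, here is a minimal sketch of blocking a screening design on "tool". The three factors, the coded levels, and the choice to confound the block with the three-factor interaction (ABC) are all assumptions for illustration; JMP's Custom Design platform would handle this directly.

```python
from itertools import product

# Sketch of option 3: a 2^3 full factorial in factors A, B, C (8 runs),
# split into two blocks of 4 so that each "tool" stays within its use limit.
# The block is confounded with the ABC interaction (an assumed choice).
runs = list(product([-1, 1], repeat=3))
blocks = {1: [], 2: []}
for a, b, c in runs:
    block = 1 if a * b * c == 1 else 2   # sign of ABC decides the block
    blocks[block].append((a, b, c))
for tool, rs in blocks.items():
    print(f"tool {tool}: {rs}")
```

With this split, main effects and two-factor interactions remain estimable, at the cost of confounding ABC with any tool-to-tool difference.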

 

I hope these few options help you with your reflection and experimental setup.

Victor GUILLER
Scientific Expertise Engineer
L'Oréal - Data & Analytics