I am happy to do that, but this brings me back to where I started. To quantify what is going on (and so you can advise whether I am simply being overly concerned): when I set both time and temperature to continuous and enter the linear model along with the linear constraints, I end up with an average prediction variance of ~0.4. When I round the values in the table afterwards, it increases to ~2.8. This is what concerned me initially, since the average prediction variance is ~7 times larger. I imagine that if the expected difference in the response is large, this may be less of an issue, but I am not certain that will be the case. I am interested to hear your thoughts.
Personally, I agree with Mark's advice. The problem you are fighting is the definition of continuous. Is the factor continuous or not? It is, so it should be treated that way. I understand that the average prediction variance increases quite a lot, but that is due to the constraints you are putting on the system, including the integer-based temperatures. What about the maximum prediction variance? What about the minimum prediction variance? More specifically, how do the Fraction of Design Space plots compare? For this situation, I think that picture will tell a better story than the average prediction variance alone. You really want to see the "distribution" of prediction variances, not just a single point.
When you change from fractional temperatures to integers, my guess is that the maximum prediction variance increases substantially, pulling the average up. Remember, with the linear constraints your design space is smaller, so a change out at the high "fraction of space" end will have a bigger impact on the average.
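To make the "distribution of prediction variances" idea concrete, here is a minimal Python sketch (an illustrative unconstrained 8-run design, not your actual one, and not JMP's exact computation) of the relative prediction variance x'(X'X)⁻¹x that an FDS plot summarizes:

```python
import numpy as np

def model_matrix(t1, t2):
    # Main-effects (linear) model in two coded factors: 1, time, temp.
    return np.column_stack([np.ones_like(t1), t1, t2])

# Hypothetical 8-run design with both factors coded to [-1, 1].
design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
X = model_matrix(design[:, 0], design[:, 1])
XtX_inv = np.linalg.inv(X.T @ X)

# Sample the design space; a linear constraint would be applied here
# as a mask on `pts`, which shrinks the region being averaged over.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(10_000, 2))
F = model_matrix(pts[:, 0], pts[:, 1])

# Relative prediction variance x'(X'X)^-1 x at each sampled point.
pv = np.einsum("ij,jk,ik->i", F, XtX_inv, F)

# The FDS plot is just these values sorted against fraction of space.
print(f"min={pv.min():.3f}  mean={pv.mean():.3f}  max={pv.max():.3f}")
for q in (0.5, 0.9, 1.0):
    print(f"{q:>4.0%} of space has PV <= {np.quantile(pv, q):.3f}")
```

The min/mean/max line shows why the average alone can mislead: a large maximum out at the edge of a small constrained region can drag the mean up even when most of the space is fine.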
Can you show the I-optimal design from JMP (data table before) and then the resulting design after you round the levels (data table after)? I am surprised that rounding to the nearest degree over such a wide range would cause the prediction variance to increase that much.
Hi Dan and Mark,
Thank you once again for your feedback. As I was preparing some screenshots, I noticed something odd that may be contributing to the problem. When I first generate the design, the design evaluation shows an average prediction variance of ~0.4. However, if I then click "Make a Table" at the bottom of the dialog box and then click "evaluate design" on the right, prior to rounding anything, the calculated average prediction variance (APV) increases to ~1.4. I have reproduced this multiple times. Given that both designs are identical, I'm not sure why the APV is changing, let alone changing that much. I'm running short on time, so I'll have to post the screenshots later today, but if you have any idea where this changing APV could be coming from, perhaps I need to solve that first.
I have not seen this behavior before. Then again, I wasn't looking for it like you!
I would report this anomaly to email@example.com for investigation. I expect adding constraints to introduce correlations among the estimates that affect the performance of the design. I do NOT expect making a data table with the design to affect the performance!
(One other thought: right-click on the table in the Design section of custom design and select Make Into Data Table. Use the design evaluation command in the DOE menu to see if you get the same results as in the custom design platform. Repeat this examination with the data table produced by clicking Make Table. What do you see?)
I would also proceed to use the design as it is in the data table. I am confident in the approach that you used. As always, be careful when performing each run and during the analysis.
Please report here anything that you might learn from JMP Technical Support. Of course, do not hesitate to ask any more questions that you might have as you proceed.
This is really interesting. So, as I mentioned, the APV was ~0.4 initially in the design dialog box, right after the design was created. If I clicked the "Create a Table" button and then chose "Design Evaluation" on the left, I got an APV of ~1.4. However, if I do as you suggest and create a table by right-clicking on the table in the design dialog box, and then selecting "Evaluate Design" under DOE, the APV is ~0.2. Three different APVs for identical designs. I think I will go ahead and report this to firstname.lastname@example.org to see what they have to say. While I'm not ruling out user error, I'm not sure what it could be at this point. I'll be back in touch once this is, hopefully, sorted out.
I have discovered a couple more interesting things, so I wanted to take a moment to update this thread in case it will help anyone in the future:
1) It turns out that when right-clicking to create the table and then selecting design evaluation from the DOE menu, the higher-order model terms are somehow dropped. This is why the average prediction variance (APV) drops considerably, to ~0.2, so that figure is not correct. Having to add all the terms back is inefficient, so this is probably not the best approach.
2) If I create the custom design and save it, then round all the values and save that, and use the "Compare Designs" feature under DOE > Design Diagnostics, the two designs are virtually identical in every respect, including APV, which in this case is ~0.5. Of course they aren't exactly the same, but the APV of the rounded design is certainly _not_ 7 times larger. So this appears to be the only reasonable way of comparing the APV of two designs.
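For anyone who wants to sanity-check the rounding effect outside JMP, here is a minimal Python sketch (hypothetical run values and ranges, with factors coded to [-1, 1]) of why rounding temperature to the nearest degree over a wide range barely moves the APV:

```python
import numpy as np

def coded(temp, lo=20.0, hi=80.0):
    # Map natural units onto the coded [-1, 1] scale.
    return 2.0 * (temp - lo) / (hi - lo) - 1.0

# One shared Monte Carlo sample of the design space, so both
# designs are scored on exactly the same points.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(20_000, 2))
F = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])

def apv(time_coded, temp_natural):
    # Average of x'(X'X)^-1 x for a linear (main-effects) model.
    X = np.column_stack([np.ones_like(time_coded), time_coded,
                         coded(temp_natural)])
    A = np.linalg.inv(X.T @ X)
    return np.einsum("ij,jk,ik->i", F, A, F).mean()

# Hypothetical 8-run design: time already coded, temp in degrees.
time_c = np.array([-1, -1, 1, 1, -1, 1, 0, 0], dtype=float)
temp = np.array([20.0, 79.6, 20.4, 80.0, 50.2, 49.8, 20.0, 80.0])

print(f"APV exact:   {apv(time_c, temp):.4f}")
print(f"APV rounded: {apv(time_c, np.round(temp)):.4f}")  # nearly identical
```

Over a 60-degree range, rounding to the nearest degree shifts each coded value by at most ~0.008, so the two information matrices, and hence the two APVs, are nearly the same, which matches what Compare Designs reports.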
Ultimately, both of you are correct that keeping the factors "continuous" and rounding doesn't make a huge difference, but that isn't obvious amid the APV weirdness unless the Compare Designs tool is used.
Anyway, thank you for your help. I doubt that I would have figured this out without all of the feedback.
Please report this phenomenon to JMP Technical Support as promised. This behavior is not good and such a work-around should not be necessary.
I reported the problem(s) on Friday and also pointed them to this thread for more information.