Hi @Victor_G,

Once again, thanks for the response. I feel like I'm getting a better idea of how everything works. I should add that the goal of these experiments is to minimize the responses; creating a predictive model isn't the goal. Really, I just need something good enough to tell me, with decent confidence, which combination of factors minimizes the responses.

It's probably useful if I give you some more background on the actual experiment. It's essentially a dissolution experiment, but the responses are a proxy for how much material is left undissolved, so lower responses indicate better dissolution. We expect the responses to be very high at the least optimal conditions, showing close to no dissolution, and to approach zero as we reach more optimal conditions, with zero meaning complete dissolution of the DP, which is the goal.

Factor A is the percentage of one component in a binary mixture of aqueous and organic solvents.

Factor B is stir time in minutes. For practical reasons this cannot be extended beyond 180 minutes.

Factor C is whether the solvent mixture (Factor A) is neutral or acidic.

Previous experiments have been done with a neutral solution, so I was not completely in the dark about what the results should be. It was shown previously that, with a neutral-pH solution, the optimal Factor A was very close to the midpoint and that longer stir times meant more dissolution. I had a hypothesis that acidifying the solution might increase dissolution, so I opted for this experiment, which was essentially a head-to-head comparison of neutral versus acidic solutions. I thought, though, that acidifying the solvent might change the dissolution profile and shift the optimal level of Factor A to the left or right, which is why the range is set relatively wide.

From the results obtained so far, this does not appear to be the case; if anything, acidifying the solution appears to make the responses worse across the board. At the same time, I would like to dig a little deeper and show that this remains the case when Factor A is near its optimal level. In addition, I would like the model to show that dissolution increases with Factor B, as would be expected, at least for the neutral solutions.

You mentioned that the primary concern was the residual-by-row plot, where there appeared to be a non-random pattern. This looks like autocorrelation, and I would agree. I can confirm that the order of the experiments in the data table matches the order in which they were actually performed.
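As a quick sanity check on that read, the Durbin-Watson statistic on the run-ordered residuals puts a number on it: values near 2 suggest no autocorrelation, while values well below 2 suggest positive autocorrelation between consecutive runs (consistent with carry-over). A minimal sketch; the residual values below are made up for illustration, not my actual data:

```python
# Hypothetical residuals in run order -- illustrative values only,
# not the actual experimental residuals.
residuals = [1.2, 0.9, 0.7, 0.4, -0.1, -0.5, -0.8, -0.6, -0.2]

# Durbin-Watson statistic: sum of squared successive differences
# divided by the sum of squared residuals.
num = sum((residuals[i] - residuals[i - 1]) ** 2 for i in range(1, len(residuals)))
den = sum(r ** 2 for r in residuals)
dw = num / den
print(round(dw, 3))  # well below 2 here, i.e. positive autocorrelation
```

For a real fitted model, statsmodels provides the same statistic as `statsmodels.stats.stattools.durbin_watson`.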

Having thought carefully about how the experiment was performed, I believe this may have been caused by carry-over effects; that makes the most sense for this system. Steps were taken to prevent carry-over, but looking at the data it appears it was not completely eliminated, so I can take extra measures in future experiments to remove it entirely.

With this in mind, would it still be appropriate to augment this experiment? I would like to keep the data generated so far, as it matches up, in broad strokes, with past experiments.

At this point I really only have one experiment left before, unfortunately, I have to pick parameters to move forward with. I expect the optimal settings to be somewhere around the midpoint of Factor A. The expectation is also that dissolution increases with stir time, but whether there is a meaningful difference between 10 minutes and 180 minutes when Factor A is near optimal is not yet borne out in the data. Lastly, I'm not sure whether Factor C matters when Factors A and B are near optimal.

As such, my thought is to augment the experiment over the range 45-55 for Factor A while keeping the other factors the same. Nine more runs seems appropriate, as you explained earlier, to include the blocking factor. It does seem a pity that it won't let you add more center points in the augmentation process, as I would really like to see how Factors B and C behave when Factor A is exactly at the midpoint.

As for the data transformation, it certainly seems from the Box-Cox plot that a transformation somewhere between ln(y) and sqrt(y) would give better results, so I will likely use one of those transformed values for the model in future.
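For reference, on the Box-Cox scale lambda = 0 corresponds to ln(y) and lambda = 0.5 roughly to sqrt(y), so a lambda estimate between those two is consistent with either choice. A minimal sketch of estimating lambda outside JMP with SciPy; the response values here are invented for illustration, not my actual dissolution data:

```python
import numpy as np
from scipy import stats

# Hypothetical positive response values -- illustrative only,
# not the actual dissolution responses.
y = np.array([95.0, 60.0, 32.0, 15.0, 8.0, 4.0, 2.5, 1.2, 0.6])

# With lmbda=None, stats.boxcox returns the transformed data and the
# maximum-likelihood estimate of lambda. lambda = 0 means ln(y);
# lambda = 0.5 is close to a square-root transform.
y_transformed, lam = stats.boxcox(y)
print(f"estimated lambda: {lam:.2f}")
```

Box-Cox requires strictly positive responses, so any exact-zero responses (complete dissolution) would need a small offset or a different transform.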

Lastly, you mentioned that the blocking factor did not seem to be a significant explanatory variable. Where did you get that information from?

Thanks,