Hi @Chemist9,
Ok, the model does indeed look much better in the screenshots you sent.
How many centre points do you have in your RSM? If you look at my previous link from StatEase, the common number of centre points is usually around 3 to 5. If you have a much higher number of centre points, it's possible that you have drastically increased the sensitivity of the lack-of-fit test, and that the small p-value found is a "false positive". You can try to "hide and exclude" some centre points and see whether the test diagnostic stays the same or not.
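To see why centre points drive this test, here is a small sketch of the classical lack-of-fit F-test computed by hand (the data and the fitted model below are entirely made up for illustration, not taken from your design): pure error is estimated from replicates, which are mostly the centre points, so many centre points means many pure-error degrees of freedom and a very sensitive test.

```python
# Sketch (hypothetical data): a manual lack-of-fit F-test. Pure error comes
# from replicated runs (typically the centre points), so a large number of
# centre points makes the test very sensitive.
import numpy as np
from scipy import stats

def lack_of_fit_f(y, y_hat, groups, n_params):
    """Split residual SS into lack-of-fit and pure error, return (F, p).
    groups: one id per run; replicated runs share the same id.
    n_params: number of parameters in the fitted model."""
    y, y_hat, groups = map(np.asarray, (y, y_hat, groups))
    ss_pe, df_pe = 0.0, 0
    for g in np.unique(groups):
        yg = y[groups == g]
        ss_pe += np.sum((yg - yg.mean()) ** 2)  # pure-error SS from replicates
        df_pe += yg.size - 1                    # only replicates contribute df
    ss_lof = np.sum((y - y_hat) ** 2) - ss_pe   # residual SS minus pure error
    df_lof = np.unique(groups).size - n_params  # distinct points minus parameters
    f = (ss_lof / df_lof) / (ss_pe / df_pe)
    return f, stats.f.sf(f, df_lof, df_pe)

# Tiny assumed example: a straight line fitted to a curved response,
# with the centre point (x = 0) replicated twice.
x = np.array([-1.0, 0.0, 0.0, 1.0])
groups = [0, 1, 1, 2]
y_obs = np.array([1.0, 0.0, 0.2, 1.0])
y_fit = np.polyval(np.polyfit(x, y_obs, 1), x)  # least-squares line
f_stat, p_val = lack_of_fit_f(y_obs, y_fit, groups, n_params=2)
print(f_stat, p_val)
```

Adding more replicated centre points increases `df_pe`, which shrinks the pure-error mean square's uncertainty and lets even a small, practically irrelevant curvature show up as a significant p-value.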
You can also try to add other terms to your model manually in the Fit Model window. Which type of Central Composite RSM design have you chosen? Face-centered? If so, you have three different levels per factor, so the order of your model will be limited to 2. You can still try to add partial cubic terms to the model, which are interactions between main effects and quadratic effects. These terms are usually rare, but if one of them is active, it could significantly improve your model.
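To make the partial cubic idea concrete, here is a toy sketch with fabricated data (the design, coefficients, and noise level are all assumptions, not your experiment): on a face-centred CCD a pure cubic term like x1^3 is aliased with x1 (only three levels, and (-1)^3, 0^3, 1^3 reproduce the levels themselves), but a mixed term like x1^2*x2 is estimable and can be added as an extra model column.

```python
# Sketch (fabricated data): on a face-centred CCD, x1**3 is aliased with x1,
# but a partial cubic term such as x1**2 * x2 is estimable and can be added
# manually. We compare residual SS with and without that term.
import numpy as np

rng = np.random.default_rng(1)
# Two-factor face-centred CCD: 4 factorial, 4 axial, 4 centre runs
x1 = np.array([-1., -1., 1., 1., -1., 1., 0., 0., 0., 0., 0., 0.])
x2 = np.array([-1., 1., -1., 1., 0., 0., -1., 1., 0., 0., 0., 0.])
# Assumed true surface with an active partial cubic effect (0.9 * x1^2 * x2)
y = 2 + x1 + 0.5 * x2 + 0.8 * x1 * x2 + 1.2 * x1**2 + 0.9 * x1**2 * x2
y = y + rng.normal(0.0, 0.05, size=y.size)

def rss(X, y):
    """Residual sum of squares of a least-squares fit of X to y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

quad = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
cubic = np.column_stack([quad, x1**2 * x2])  # one extra partial cubic column
print(rss(quad, y), rss(cubic, y))
```

When the partial cubic effect is truly active, the residual sum of squares drops sharply once the term is included; in JMP you would add it in Fit Model by crossing the quadratic and main-effect terms.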
Finally, think about how you validate your model:
- What is the purpose of this model? Optimization, prediction, knowledge building, etc.? What are your criteria for success?
- Does the model's behaviour match your domain knowledge and expectations?
- Is the precision of the predictions practically acceptable?
- Are you planning to validate your model over the entire experimental space (by running new experiments at untested locations of the experimental space)? Or are you interested only in the optimum found by the model?
These non-exhaustive questions can help you decide when to stop improving your model and start using it.
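As a toy illustration of the precision question, a confirmation-run check could be as simple as the following (the predictions, observations, and tolerance below are fabricated numbers, and the acceptability limit is whatever your application requires):

```python
# Sketch (fabricated numbers): comparing confirmation runs at untested
# locations against model predictions, with an assumed acceptability limit.
import numpy as np

pred = np.array([10.2, 12.8, 9.5])  # model predictions at confirmation points
obs = np.array([10.0, 13.1, 9.9])   # assumed new experimental results
rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
tolerance = 0.5                      # assumed practical precision requirement
print(rmse, rmse <= tolerance)
```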
Hope this answer helps you,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)