Cedrick, I have no idea how familiar you are with experimental design. It is not a casual discussion. There are many elements to designing and analyzing experiments, but I would say that if you have designed the experiment well, the analysis is quite straightforward. By the way, in some instances directed sampling is more efficient (e.g., components-of-variation studies) and should be done prior to experimentation.
"If your result needs a statistician, then you should design a better experiment."
Ernest Rutherford
I will briefly address your questions in general, but suggest you find an avenue to develop your understanding of the methodology (e.g., self-study, take some classes). I typically teach the methodology over 6 months of intensive training, but it takes years of practical application to fully unleash the power of experimentation.
The first thing to remember is that learning is iterative. This is true of your experimental plans as well: the first experiment is intended to design a better experiment.
All experiments should begin with design. Before running anything, work through questions like these:
- Is this an explanatory investigation, or are you developing a predictive model?
- What questions are you trying to answer? Where are you in the knowledge continuum, and what knowledge are you trying to gain?
- What hypotheses do you want insight into, and how will those hypotheses be represented by factors?
- At what levels can/should the factors be set (e.g., if you are low in the knowledge continuum, be bold but reasonable)?
- How will noise be handled (i.e., how will you handle factors that you are not willing to control in the future)?
- What are the appropriate response variables? Is the opportunity to learn about central tendency, variation, or both (if variation, do you have a response variable in the form of variation)?
- Are the measurement systems adequate? And so on.
I recommend designing multiple experiments. Compare and contrast the potential knowledge gained (e.g., what effects can be estimated, what effects will be confounded, what effects are not in the study) against the resources required. Predict all possible outcomes of each plan and weigh this against the resources. Then pick one and run it. Be prepared to iterate.
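As a minimal sketch of that kind of comparison (Python with numpy; the factors A, B, C are hypothetical), contrast a full 2^3 factorial with a 2^(3-1) half fraction and note what gets confounded:

    import itertools
    import numpy as np

    # Full 2^3 factorial: 8 runs; all main effects and all interactions
    # (AB, AC, BC, ABC) can be estimated separately.
    full = np.array(list(itertools.product([-1, 1], repeat=3)))

    # Half fraction 2^(3-1) with generator C = A*B: only 4 runs, but the
    # defining relation I = ABC aliases each main effect with a two-factor
    # interaction (A with BC, B with AC, C with AB).
    half = np.array([(a, b, a * b) for a, b in itertools.product([-1, 1], repeat=2)])

    print("full factorial runs:\n", full)
    print("half fraction runs:\n", half)

Half the runs, but any estimate of C is really C + AB. That is exactly the knowledge-versus-resources trade-off to weigh before committing.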
Analysis with multiple responses may start with multivariate methods. This is twofold (a sketch of both checks follows the list):
1. Assess correlation between the multiple responses (responses that correlate strongly will have similar models)
2. Look for multivariate outliers (e.g., Mahalanobis distance)
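A minimal sketch of both checks (Python with numpy; the response matrix here is toy data standing in for your own):

    import numpy as np

    rng = np.random.default_rng(0)
    Y = rng.normal(size=(8, 3))  # toy data: 8 runs x 3 responses

    # 1. Correlation between responses: strongly correlated columns will
    #    tend to produce similar models.
    print(np.corrcoef(Y, rowvar=False))

    # 2. Mahalanobis distance of each run from the multivariate centroid;
    #    unusually large values flag potential multivariate outliers.
    D = Y - Y.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(Y, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', D, cov_inv, D)
    print(np.sqrt(d2))

With data sets this small the covariance estimate itself is shaky, so treat the distances as a screening aid, not a verdict.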
Looking for outliers in DOE data sets is paramount, because the data sets are very small and individual data points can therefore have an outsized influence on the analysis.
To analyze each response, I always follow a simple sequence: Practical, graphical, quantitative. In that order.
First, did the response variable vary enough over the design space to support further analysis (i.e., is there a practically significant change in the response over the design space)? How did the responses compare to your predicted values (this assumes you predicted the results a priori)? Does it make sense? Are there obvious patterns (I use ANOG) or unusual data points? How does the data relate to your hypotheses?
Then use graphical analysis. Plot the data for each response: in ANOG order, in run order, normal plots, Pareto plots, etc.
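For instance, a run-order plot plus a normal plot of effects (a Daniel plot) for one response of a 2^3 factorial; Python with numpy/scipy/matplotlib, and the response values are toy data:

    import itertools
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    # Toy response values for the 8 runs of a 2^3 full factorial.
    y = np.array([45.0, 48.2, 52.1, 55.0, 44.8, 49.1, 60.3, 66.2])
    A, B, C = np.array(list(itertools.product([-1, 1], repeat=3))).T

    contrasts = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
    names = np.array(['A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC'])
    effects = contrasts.T @ y / 4  # (mean at +1) - (mean at -1)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

    # Run-order plot: look for drift or trends over the experiment.
    ax1.plot(np.arange(1, 9), y, marker='o')
    ax1.set(xlabel='run order', ylabel='response', title='Run-order plot')

    # Daniel plot: effects falling off the line of trivial effects matter.
    order = np.argsort(effects)
    q = stats.norm.ppf((np.arange(1, 8) - 0.5) / 7)
    ax2.scatter(q, effects[order])
    for qi, e, name in zip(q, effects[order], names[order]):
        ax2.annotate(name, (qi, e))
    ax2.set(xlabel='normal quantile', ylabel='effect',
            title='Normal plot of effects')
    plt.tight_layout()
    plt.show()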
"Results of a well planned experiment are often evident using simple graphical analysis. However the world’s best statistical analysis cannot rescue a poorly planned experimental program."
Gerry Hahn
For quantitative analysis, I suggest a subtractive approach to model building: start with a saturated model and remove insignificant terms. (When designing experiments, recognize that your design is a function of the model you are hypothesizing.) As you simplify/reduce the model, use statistics to help: the delta between R-squared and adjusted R-squared, RMSE, p-values, CV, etc., along with residuals analysis.
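One way to sketch this subtractive approach (Python with pandas/statsmodels; the design, the "true" model, and the data are all hypothetical, with the 2^3 design replicated so the saturated fit leaves degrees of freedom for error):

    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    F = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
    F = np.vstack([F, F])  # two replicates -> 16 runs
    A, B, C = F.T

    # Hypothetical truth: y = 50 + 4A + 3B + 2AB + noise.
    y = 50 + 4 * A + 3 * B + 2 * A * B + rng.normal(scale=1.0, size=16)

    X = pd.DataFrame({'A': A, 'B': B, 'C': C, 'AB': A * B,
                      'AC': A * C, 'BC': B * C, 'ABC': A * B * C})
    X = sm.add_constant(X)

    # Start saturated, then drop the least significant term until all
    # remaining terms clear alpha; watch R^2 vs adjusted R^2 as you go.
    # NB: a real analysis would also respect model hierarchy
    # (e.g., keep A in the model as long as AB stays).
    model = sm.OLS(y, X).fit()
    while True:
        pvals = model.pvalues.drop('const')
        if pvals.empty or pvals.max() < 0.05:
            break
        worst = pvals.idxmax()
        X = X.drop(columns=worst)
        model = sm.OLS(y, X).fit()
        print(f"dropped {worst}: R2={model.rsquared:.3f}, "
              f"adj R2={model.rsquared_adj:.3f}")

    print(model.params)

The mechanical loop is only the skeleton; the statistics listed above, plus residuals analysis, should inform each removal.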
"A good model is an approximation, preferably easy to use, that captures the essential features of the studied phenomenon and produces procedures that are robust to likely deviations from assumptions."
G.E.P. Box
Some important points to keep in mind:
Statistical significance is a conditional statement! In a designed experiment, you are comparing the variation due to the factors being manipulated against the noise that changes during the experiment, conditional on everything that is held constant. If any of those conditions change, so may statistical significance.
Extrapolation of experimental results is a managerial or engineering decision, not a statistical one.
"All models are wrong, some are useful" G.E.P. Box