I am providing a series of steps that could be applied, given that I have very little idea of what the responses are, what the factors are, or what hypotheses you are interested in understanding. I always suggest you look at the data through the lens of the people who have the scientific or engineering hypotheses and understand the situation. I performed some steps and saved the scripts to the data table so you can recreate them.
1. Did the response variables change by an amount that is of scientific or engineering value? You did indicate directions, but I am asking about the magnitude of change in Y: how much change is of practical significance for each response? If the changes are practically significant, continue; if not, consider three possibilities:
The measurement systems lack discrimination
The factor levels were not set boldly enough
None of the factors have an appreciable effect.
2. I recoded and cleaned up the data table. I made the factors continuous where the levels appeared to be continuous, and made the response variables continuous as well. Of course, this should be done with better knowledge of the actual factors and response variables.
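As a rough sketch of that recoding step (the column names and levels below are hypothetical, since the actual table is unknown), one could auto-detect factor columns whose levels all look numeric and convert them to continuous:

```python
import pandas as pd

# Hypothetical design table; the real factor and response names are unknown.
df = pd.DataFrame({
    "Factor1": ["10", "20", "10", "20"],   # levels look numeric -> treat as continuous
    "Factor2": ["A", "B", "A", "B"],       # genuinely categorical -> leave as-is
    "Y1": ["3.1", "4.2", "2.9", "5.0"],    # response stored as text -> make continuous
})

# Convert any column whose values all parse as numbers to a numeric dtype.
for col in df.columns:
    converted = pd.to_numeric(df[col], errors="coerce")
    if converted.notna().all():
        df[col] = converted
```

In practice someone who knows the factors should override this: a factor coded 1/2/3 for three machines is categorical even though its levels parse as numbers.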
3. I ran a multivariate analysis of the response variables. I did this for two main reasons:
to see if there were any patterns showing correlation among the response variables (ideally this is done after predicting which correlations are suspected, but since I don't know what the responses represent, I could not judge whether the observed correlations make sense),
to determine if there are any multivariate outliers that should be considered.
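Both checks can be sketched with simulated data (the actual responses are unknown, so everything below is illustrative): the correlation matrix shows response-to-response patterns, and Mahalanobis distances flag multivariate outliers.

```python
import numpy as np

# Simulate 3 responses, the first two correlated, and plant one multivariate
# outlier in row 0. All numbers are stand-ins for the unknown real data.
rng = np.random.default_rng(1)
Y = rng.multivariate_normal(
    [0, 0, 0],
    [[1.0, 0.8, 0.0],
     [0.8, 1.0, 0.0],
     [0.0, 0.0, 1.0]],
    size=50,
)
Y[0] = [6.0, -6.0, 0.0]  # violates the Y1-Y2 correlation pattern

# Response-to-response correlation matrix
corr = np.corrcoef(Y, rowvar=False)

# Squared Mahalanobis distance of each row from the multivariate mean
mu = Y.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(Y, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", Y - mu, cov_inv, Y - mu)
```

Note the planted point is not extreme in any single response; it stands out only because it breaks the joint correlation structure, which is exactly what a univariate outlier screen would miss.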
4. I sorted each response variable from best to worst and plotted moving range charts to test for outliers. (I did not save the scripts for this; it is done for each response independently, and I attached the journal.) I could not ascertain what the target was for response 4. Response 5 is binary, so the measurement system may lack discrimination. Note: I like to look at the patterns in the factors associated with the patterns and the "jumps" in the response variables. I call this the ANalysis Of Good (ANOG).
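The mechanics of that screen can be sketched as follows (simulated response, since the real ones are unknown): after sorting, an outlier shows up as a large jump between adjacent values, and the standard moving range chart limit flags it.

```python
import numpy as np

# Simulated response with one gross outlier; stand-in for the real data.
rng = np.random.default_rng(2)
y = np.append(rng.normal(10, 1, 29), 25.0)

y_sorted = np.sort(y)                 # "best to worst" ordering
mr = np.abs(np.diff(y_sorted))        # moving ranges of adjacent sorted values
ucl = 3.267 * mr.mean()               # standard MR-chart upper control limit (D4 * MR-bar)

jumps = np.flatnonzero(mr > ucl)      # positions where the sorted values jump
```

Each jump splits the sorted runs into groups; comparing the factor settings within the "good" group against the rest is the ANOG idea described above.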
5. I ran Fit Model for all responses (saturated models) and created normal plots and Pareto plots to assess statistical significance and practical significance, respectively (though I don't know what is practically significant here). I also created interaction plots and prediction profiles.
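The effect estimates behind those normal and Pareto plots can be computed directly from contrasts. Here is a sketch for a hypothetical 2^3 two-level design (the real design and responses are unknown; factor names A, B, C and the planted effects are assumptions):

```python
import numpy as np
from itertools import combinations

# Full 2^3 design in -1/+1 coding
X_main = np.array([[-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
                   [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1]], float)

# Saturated model columns: main effects, two-way and three-way interactions
cols = {"A": X_main[:, 0], "B": X_main[:, 1], "C": X_main[:, 2]}
for pair in combinations("ABC", 2):
    cols["".join(pair)] = cols[pair[0]] * cols[pair[1]]
cols["ABC"] = cols["A"] * cols["B"] * cols["C"]

# Hypothetical response: real A effect and B*C interaction, plus noise
rng = np.random.default_rng(3)
y = 10 + 3 * cols["A"] + 2 * cols["BC"] + rng.normal(0, 0.3, 8)

# Effect = mean(y at +1) - mean(y at -1) = 2 * (x'y) / n for +-1 coding
effects = {name: 2 * (x @ y) / len(y) for name, x in cols.items()}
```

A normal plot of these effects flags the ones that fall off the straight line (here A and BC); a Pareto plot ranks their magnitudes, which is where the practical-significance judgment has to come from the subject-matter experts.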
6. I took a shot at reducing the models for each response (based solely on statistical significance, without regard to practical significance) and re-ran Fit Model, mainly to look at the residuals.
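A minimal sketch of that reduce-and-refit loop, on simulated data (the real models are unknown; the |t| > 2 cutoff is one common rule of thumb, not a recommendation): fit the full model, drop terms that are not statistically significant, refit, and keep the residuals for diagnostic plots.

```python
import numpy as np

# Simulated data: only x1 truly matters; x2 and x3 are noise terms.
rng = np.random.default_rng(4)
n = 40
x1, x2, x3 = rng.normal(size=(3, n))
y = 5 + 2 * x1 + rng.normal(0, 0.5, n)

# Full model: intercept plus all three candidate terms
X = np.column_stack([np.ones(n), x1, x2, x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# t-statistics for each coefficient
s2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se

# Keep only statistically significant terms, refit, re-examine residuals
keep = np.abs(t) > 2
X_red = X[:, keep]
beta_red, *_ = np.linalg.lstsq(X_red, y, rcond=None)
resid_red = y - X_red @ beta_red
```

The refit residuals are what get checked for patterns, non-normality, and outliers; a term that is statistically significant but practically trivial should still be questioned by someone who knows the process.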