To add to my colleague Lou Valente's contribution above: if I'm interpreting your question as "When in the analysis workflow should I test for normality of (something)?", the answer depends largely on the practical questions you are trying to answer AND the analysis methods you are employing to answer them. For example, if your ultimate goal is to, say, calculate process capability indices, some indices are fairly sensitive to the normality assumption, so testing the raw data for normality BEFORE any analysis is generally a worthwhile few minutes spent.
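As a minimal sketch of that up-front check (the measurements and specification limits here are hypothetical, and I'm using SciPy's Anderson-Darling test as one common choice):

```python
import numpy as np
from scipy import stats

# Hypothetical raw measurements from the process (replace with your own data)
rng = np.random.default_rng(42)
measurements = rng.normal(loc=10.0, scale=0.2, size=100)

# Anderson-Darling test for normality on the raw data, BEFORE any capability analysis
result = stats.anderson(measurements, dist="norm")
print(f"A-D statistic: {result.statistic:.3f}")
for crit, sig in zip(result.critical_values, result.significance_level):
    verdict = "reject normality" if result.statistic > crit else "cannot reject"
    print(f"  at {sig}% significance: critical value {crit:.3f} -> {verdict}")

# Only if normality is not rejected does a normal-theory Cpk estimate make sense
usl, lsl = 10.6, 9.4  # hypothetical specification limits
mean, sd = measurements.mean(), measurements.std(ddof=1)
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"Cpk (normal-theory estimate): {cpk:.2f}")
```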
On the other hand, let's say you are analyzing a designed experiment and (especially if you made this cardinal DOE execution mistake) you failed to run the experiment in random order. You may then have had a nuisance/noise factor (a machine warm-up effect, for example) whose influence you'd like to detect if it did in fact rise up to influence your results. In that case, a time series plot of the residuals against experimental execution order (which means you've got to fit a model first) is warranted as your first line of defense in detecting the presence of the nuisance variable.
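A minimal sketch of that first-line check, assuming a simple two-factor factorial stored in a pandas DataFrame with a recorded run_order column (the column names, data values, and model here are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

# Hypothetical DOE results; run_order records the actual execution sequence
df = pd.DataFrame({
    "run_order": range(1, 9),
    "temp":   [-1, 1, -1, 1, -1, 1, -1, 1],
    "speed":  [-1, -1, 1, 1, -1, -1, 1, 1],
    "yield_": [74.2, 78.9, 75.1, 80.3, 76.0, 80.8, 77.4, 82.1],
})

# Fit the intended factorial model first -- residuals only exist after a model
model = smf.ols("yield_ ~ temp * speed", data=df).fit()

# Time series plot of residuals in execution order; a drift or trend here
# hints at a nuisance variable such as machine warm-up
plt.plot(df["run_order"], model.resid, marker="o")
plt.axhline(0, linestyle="--")
plt.xlabel("Experimental execution order")
plt.ylabel("Residual")
plt.title("Residuals vs. run order")
plt.show()
```

A visible trend or shift in that plot, rather than a random scatter around zero, is the signal that something time-related crept into the results.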