I'll take a stab at this, although I think you probably need to be more specific to get better advice. I'm not sure how you are distinguishing "cleaning" data from "preprocessing," but I would advise against the steps you list. I never recommend removing outliers - at least not until fairly well into an analysis, when you have some understanding of why the outliers are, in fact, outliers.

I also would not recommend normalizing or rescaling data as a general practice (though it may be useful in particular contexts) - many JMP analysis platforms automatically normalize data where it is helpful. Similarly, imputing missing data is unnecessary in most JMP platforms, as it is done automatically (usually by just checking a box to include missing values). There are times you will want to impute missing values more carefully, perhaps using your own methodology (e.g., building a regression model to impute missing data), but this again will depend on the context.

So, I don't recommend doing any of the things you list automatically. Instead, I'd begin by graphically examining your data to make sure you understand what is being measured and what types of relationships seem to exist and are potentially important, and to ascertain whether some "preprocessing" is a good idea.
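Not JMP-specific, but to give a sense of what regression-based imputation looks like, here's a minimal sketch in Python with NumPy. The toy data and the simple straight-line model are assumptions purely for illustration - in practice you'd fit whatever model is appropriate for your data:

```python
import numpy as np

# Toy data (illustrative only): x is fully observed, y has a missing value.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, np.nan, 8.2, 9.8])

observed = ~np.isnan(y)

# Fit y ~ b0 + b1*x by least squares on the complete rows only.
A = np.column_stack([np.ones(observed.sum()), x[observed]])
(b0, b1), *_ = np.linalg.lstsq(A, y[observed], rcond=None)

# Fill the missing entries with the model's fitted values.
y_imputed = y.copy()
y_imputed[~observed] = b0 + b1 * x[~observed]
print(y_imputed)
```

The key caution is that imputed values inherit all the assumptions of the model that produced them, which is exactly why it should be a deliberate, context-driven choice rather than a routine preprocessing step.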