Whenever I start a predictive modeling exercise, especially if I've inherited the data from somewhere else with little knowledge of how, where, when, and under what circumstances they were collected, I spend some time in what I call 'getting acquainted with the data' mode. I look for things like data quality, unusual or suspicious observations, missing values (you have none of those), nonsense values, and anything else that sticks out and might make modeling problematic. I always start with the Distribution platform just to get a feel for "Where's the middle, how spread out are the data, and is there anything odd or unusual going on?" From there, especially with a relatively small set of predictor variables, I use the Fit Y by X platform to look for relationships between predictors and responses...and compare what I see with my process/domain knowledge. If a scatter plot 'proves' that water runs uphill (in other words, contradicts known laws of physics, chemistry, biology, socioeconomic behavior, etc.), I get suspicious and suspend the modeling work until I get to the bottom of the issue.
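If you ever need to do the same 'getting acquainted' pass outside JMP, here's a minimal sketch of the idea in Python with pandas. The column names and the synthetic data are purely hypothetical stand-ins for an inherited dataset; the point is the sequence of checks: distributions first, then missing/nonsense values, then predictor-vs-response relationships compared against domain knowledge.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for an inherited dataset (names are made up).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "temperature": rng.normal(150, 10, 50),
    "pressure": rng.normal(30, 3, 50),
})
df["yield_pct"] = 0.4 * df["temperature"] - 1.2 * df["pressure"] + rng.normal(0, 2, 50)
df.loc[3, "pressure"] = np.nan  # plant a missing value to find

# Step 1 -- "Where's the middle, how spread out, anything odd?"
# (the Distribution-platform question)
print(df.describe())

# Step 2 -- missing and nonsense values
print(df.isna().sum())
print((df["yield_pct"] < 0).sum(), "negative yields (nonsense if yield is a %)")

# Step 3 -- a rough Fit-Y-by-X analogue: does each predictor/response
# relationship agree with process knowledge? Here we "know" yield should
# rise with temperature; a negative sign would be water running uphill.
corr = df.corr(numeric_only=True)["yield_pct"].drop("yield_pct")
print(corr)
assert corr["temperature"] > 0, "water runs uphill -- stop and investigate"
```

This doesn't replace eyeballing the actual scatter plots (sign checks can hide curvature and outliers); it just automates the first pass so the suspicious cases get a closer look.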
Data cleaning and prep is never fun...and takes work...but it's absolutely necessary.