Throughout my involvement in risk-based monitoring, and particularly statistical monitoring, I have heard the same line several times: “It’s just data cleaning, right?” The implication has been that it is only data cleaning and, therefore, not the responsibility of the statistician. This attitude has surprised me. Since when was the quality and integrity of our data deemed unworthy of a statistician’s attention?

While JMP Clinical does uncover anomalies that could be classed as data cleaning issues (e.g., missing or unknown data), statistical monitoring encompasses far more. It is not a comparison of treatments; rather, it is an analysis that compares sites with one another and patients with one another, irrespective of treatment.

Why do we want or need to undertake statistical monitoring? Because we need to preserve the quality and integrity of our data by ensuring that we can identify any occurrences of fraud or falsification of data, any calibration or training issues within or across sites, and any other issue that may affect quality or put the program at risk. Industry bodies have recommended that all future studies take steps to identify data quality concerns and fraud. I will show how JMP Clinical applies statistical algorithms to clinical data sets to identify outliers or trends that could indicate a risk to the study, and I will highlight some of the challenges encountered when initiating such a program within industry.
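To make the idea of comparing sites with one another concrete, here is a minimal sketch of one such check: flagging sites whose mean measurement is a robust outlier among all site means. This is purely illustrative and is not JMP Clinical's actual algorithm; the function name, the example data, and the choice of a median/MAD robust z-score are all my own assumptions.

```python
import statistics

def flag_outlier_sites(site_values, threshold=3.5):
    """Flag sites whose mean measurement is a robust outlier among site means.

    site_values: dict mapping site id -> list of patient measurements.
    A median/MAD robust z-score is used so that one extreme site does
    not inflate the spread estimate and mask itself. Returns the set of
    site ids with robust z-score above `threshold`.
    """
    site_means = {s: statistics.mean(v) for s, v in site_values.items()}
    med = statistics.median(site_means.values())
    mad = statistics.median(abs(m - med) for m in site_means.values())
    if mad == 0:
        return set()  # all site means identical: nothing to flag
    # 0.6745 rescales the MAD to be comparable to a standard deviation
    return {s for s, m in site_means.items()
            if 0.6745 * abs(m - med) / mad > threshold}

# Hypothetical systolic blood pressure readings by site; site "S4"
# reports values that are implausibly high relative to its peers.
sites = {
    "S1": [118, 122, 125, 119, 121],
    "S2": [120, 117, 123, 126, 124],
    "S3": [119, 121, 120, 122, 118],
    "S4": [150, 151, 150, 152, 151],
}
print(flag_outlier_sites(sites))  # → {'S4'}
```

A robust (median-based) spread is a deliberate choice here: the classical standard deviation of only a handful of site means is dominated by the very outlier one hopes to detect.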