JMP attends the PSI Conference in Edinburgh

I had the pleasure of interviewing Richard Zink, Principal Research Statistician Developer in the JMP Life Sciences division, prior to his visit to the UK to speak at the PSI (Statisticians in the Pharmaceutical Industry) Conference in Glasgow on 14 May. His PSI talk, titled "Assessing the Similarity of Subjects Within a Study Site," discusses sampling approaches and describes how the availability of extensive computerized logic and validation checks early in a clinical trial not only ensures data quality but can also be used to identify potentially fraudulent activity. Richard has been instrumental in the development of JMP Clinical, especially its capabilities for pharmacovigilance (PV), clinical trial fraud detection, patient narratives and Bayesian methods.

International Conference on Harmonisation (ICH) guidelines suggest that clinical trial data should be actively monitored to ensure data quality. What do you see as the limitations of and issues surrounding on-site monitoring of clinical trials?

Traditionally, this is a very manual process in which monitors compare case report form (CRF) pages against the physician's records. Not only is this time-consuming, but traveling to numerous clinical sites can be extremely expensive. There are also limitations in how the data can be reviewed. When working with paper, it is extremely difficult to examine trends in a variable over time or to compare the results of multiple subjects, and it is effectively impossible to compare results across investigator sites.

What is your view of risk-based monitoring of clinical trials?

I think people have interpreted the ICH guidelines very literally and have gotten in the habit of performing 100 percent source data verification (SDV) for all CRF fields. People may spend a lot of time reviewing fields that have little chance of error and that, at the end of the day, may have little impact on the findings of the clinical trial. In risk-based monitoring, we would take a random sample of the available CRF pages and perform a thorough review of those sampled pages. Only if the number of errors exceeded a certain error rate would more CRF pages be sampled. Of course, it would be important to sample from all relevant CRF domains. Further, the sampling fraction may be based on the importance of the data to the study. For example, given their importance, 100 percent of all data for the primary endpoint and serious adverse events (SAEs) may be reviewed.
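The two-stage sampling plan described above can be sketched in a few lines of Python. This is a minimal illustration, not part of JMP Clinical; the initial sampling fraction and acceptable error rate are hypothetical placeholders that a real monitoring plan would set according to the risk attached to each CRF domain.

```python
import random

def sample_crf_pages(pages, initial_fraction=0.2, error_threshold=0.05, seed=42):
    """Two-stage acceptance sampling over CRF pages.

    `pages` is a list of dicts whose 'has_error' flag is filled in after
    manual review of the sampled pages. The fraction and threshold values
    here are illustrative, not recommendations.
    """
    rng = random.Random(seed)
    n_initial = max(1, round(len(pages) * initial_fraction))
    sampled = rng.sample(pages, n_initial)
    error_rate = sum(p["has_error"] for p in sampled) / n_initial
    if error_rate > error_threshold:
        # Observed errors exceed the acceptable rate: escalate to full review.
        return pages, error_rate
    return sampled, error_rate
```

In practice the escalation step would likely be gradual (a larger second sample rather than an immediate 100 percent review), and critical domains such as the primary endpoint and SAEs would bypass sampling entirely, as noted above.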

Given the drive to reduce the cost of clinical trials, how can statistical sampling be used to achieve data quality whilst minimizing the cost of analyzing trials?

Hopefully, the sampling will be performed in such a way as to minimize the amount of on-site monitoring. This is beneficial in terms of travel costs, but also in terms of the number of person-hours spent manually reviewing the data. It is also extremely important to perform central monitoring of the data using a robust set of computational tools. These can include checks for outliers or implausible values, but may also include more complex analyses to examine trends over time, identify missing data, or identify noteworthy differences between investigator sites.
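A central-monitoring check for outliers or implausible values, of the kind mentioned above, might look like the following sketch. The plausibility limits and z-score cutoff are assumptions for illustration; real limits are domain-specific (for example, a plausible systolic blood pressure range), and a cutoff of 3 standard deviations is a common but arbitrary choice.

```python
import statistics

def flag_implausible(values, low, high, z_cutoff=3.0):
    """Return indices of values that are outside a plausible clinical range
    or far from the sample mean.

    `low`/`high` are domain-specific plausibility limits; `z_cutoff` is the
    number of standard deviations beyond which a value is flagged.
    """
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    flagged = []
    for i, v in enumerate(values):
        out_of_range = not (low <= v <= high)
        outlier = sd > 0 and abs(v - mean) / sd > z_cutoff
        if out_of_range or outlier:
            flagged.append(i)
    return flagged
```

Comparing the flag rates of such checks across investigator sites is one simple way to surface the between-site differences mentioned above.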

What are the emerging trends in discovering fraud in clinical trials, and how do you see tools evolving to meet the requirements for examining data for quality and fraud?

I think fraud has always been an important issue. Compared with other data quality issues, fraud is different in that there is a deliberate intent to deceive; other problems may be due to poor training or carelessness. If you have identified unusual values that point to a data quality problem, going the extra step to say that the cause is necessarily fraud is difficult. At the end of the day, whether the problem is due to fraud or carelessness, data quality issues need to be identified early so that appropriate remedies can be applied to minimize disruption to the trial and maintain its integrity.

In the last 10 to 15 years, there have been some publications describing statistical methods for identifying fraud in clinical trials. With numerous competing priorities, implementing these methods in practice may be difficult. Data standards certainly help to ensure that any software developed can be applied across different study teams, therapeutic areas or companies. Interactive graphical methods are useful for getting as many team members as possible involved, with the ability to drill down to interesting cases. Of course, there are a lot of ways in which things can go wrong, so making these reviews efficient will be extremely important.

JMP Clinical covers so much more than just improving data quality and uncovering fraud. What are the key capabilities that you would highlight about the software?

I think the software has something for everybody, but what I find most satisfying is its interactivity and the ability to review graphical results and statistical summaries side by side. In addition to the seven fraud detection tools, JMP Clinical has customizable patient profiles and adverse event narratives that allow for more straightforward clinical review and reporting. A snapshot comparison feature lets the user identify new or modified records as the study database is updated, and a built-in notes feature allows users to save and view notes at the analysis, subject or record level. For the more analytically minded, we have a robust set of analyses for adverse events, with adjustment using FDR and double FDR for incidence or time-to-event analyses, and a new feature that makes use of Bayesian hierarchical modeling. There is also an extensive set of predictive modeling tools and cross-validation features.
