Phil_Kay
Staff
Analytics with Confidence 1: Generalisability Is Key

Welcome to the first in a series of blog posts in which we will explore validation in modelling. Stay with me - I understand that this may not sound like the most exciting topic. But it is THE key method that makes modern data analytics possible. Without validation, sophisticated machine learning algorithms like support vector machines and neural networks are almost useless. And while it can be seen as a fairly “dumb” approach that massively simplifies the process of learning from data, it also needs to be applied with care, using your best understanding of the process or system that generated the data. Understanding validation is fundamental to your success in the modern age of data analytics.

In future posts, we will talk about the basics of Holdback and Cross Validation and how easy this is with the Make Validation Column tool in JMP Pro. We will see some case studies from science and engineering that illustrate the need to ensure Befitting Cross Validation (BCV) and Befitting Bootstrap Analysis (BBA). We will also cover cutting-edge approaches, including Self-Validated Ensemble Models (SVEM) for smaller data, such as designed experiments. Hopefully you will see why JMP is uniquely useful in enabling scientists and engineers to apply modern data analytics with confidence. First, we’ll talk about the need for generalisability.
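Before we get to the JMP tools, the core idea behind holdback validation can be sketched in a few lines of code. This is my own illustrative example, not anything from JMP: a "model" that simply memorises its training data looks perfect on the data it has seen, while a set of held-back rows reveals that it has learned nothing that generalises.

```python
import random

# Simulate a process with no real signal: y is pure noise around 0,
# so the only honest prediction is "about zero".
random.seed(42)
data = [(random.random(), random.gauss(0.0, 1.0)) for _ in range(100)]

# Holdback split: 70% of rows for training, 30% held back for validation.
train, holdback = data[:70], data[70:]

def memoriser_predict(x, training):
    """1-nearest-neighbour 'model': return the y of the closest training x."""
    return min(training, key=lambda p: abs(p[0] - x))[1]

def mean_predict(x, training):
    """Simple model: always predict the training mean of y."""
    return sum(y for _, y in training) / len(training)

def mse(points, predict, training):
    return sum((y - predict(x, training)) ** 2 for x, y in points) / len(points)

train_err = mse(train, memoriser_predict, train)        # exactly 0: it memorised
holdback_err = mse(holdback, memoriser_predict, train)  # large: it fit the noise

print(f"memoriser  train MSE: {train_err:.3f}, holdback MSE: {holdback_err:.3f}")
print(f"mean model train MSE: {mse(train, mean_predict, train):.3f}, "
      f"holdback MSE: {mse(holdback, mean_predict, train):.3f}")
```

The memoriser's training error is zero by construction, but its holdback error is not; only the held-back rows expose the difference. Tools like Make Validation Column in JMP Pro automate this kind of split (and more careful stratified versions of it).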

Generalisability and the Replication Crisis
The topic for this series came out of discussions with Chris Gotwalt, JMP Chief Data Scientist, and Ron Kenett, Chairman of the KPA Group. We talked about generalisability: how it is a formal process in statistics but also part of human nature.

Ron: “We generalise all the time. From our experience in one restaurant, we infer what our experience will be in another restaurant in the same chain. Or we infer from one engineering project to another.”

Chris: “That’s what learning is. Most of this doesn’t happen in a JMP data table!”

Chris Gotwalt, JMP Chief Data Scientist

Ron Kenett, Chairman of the KPA Group

In academic research, reproducibility has always been important: it is a cornerstone of science that it should be possible to replicate the findings of any study independently. It has been a particularly hot topic since the term "replication crisis" was coined in the early 2010s, with the widespread recognition of serious problems in the fields of psychology and medicine. But generalisability is also at the forefront in certain industries, and attending to generalisability enhances reproducibility.

Chris: “In a regulated industry like pharmaceuticals it has always been critical to know that a claim is generalisable and can be defended as a ‘truth.’ It would be costly in all sorts of ways if your drug is noticeably less safe or effective in the general population than you promised based on the findings of your clinical trial. But JMP users are more often working in industrial problem solving and improvement.”

Ron: “We defined generalisability as one of the eight dimensions that affect the quality of information. Addressing it improves the quality of information derived from data analysis.”

Generalisability can be ignored, but there is a high cost to getting it wrong. If the solution you arrived at through your experiments does not work when you put it into production, you will likely have wasted expensive material and manufacturing time, plus all the effort of going back to solve the problem again in R&D. Conversely, if you get it right and can be confident in your solutions, you avoid waste and accelerate the pace of innovation.

Generalisability in Modern Analytics
In the world of “small” data we use statistical measures like the p-value to understand the risk of making inferences that will not generalise. These tools were invented to enable us to use a small sample of data to generate reliable insights about the larger population or system that we actually care about. Unfortunately, p-values can be misinterpreted, misused, and even abused.
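To make this concrete, here is a hedged, stdlib-only sketch of the idea behind a p-value: how surprising is the observed difference between two small samples if the populations were really the same? I use a z-test approximation for simplicity; real small-sample work would use a t-test, as in JMP's Fit Y by X platform.

```python
from statistics import NormalDist, mean, stdev

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    # Standard error of the difference between the two sample means.
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(b) - mean(a)) / se
    # Probability of a difference at least this extreme under "no difference".
    return 2 * (1 - NormalDist().cdf(abs(z)))

same = two_sample_p([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])          # p = 1: no evidence
shifted = two_sample_p([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])  # p ~ 0: strong evidence
```

A small p-value says the observed difference would be surprising if the samples came from the same population, which is how a handful of measurements can support an inference about the wider system. It is not, as Ron notes below, the probability that a hypothesis is true.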

Ron: “Many people understand the p-value as the probability that a hypothesis holds, which is misleading. The way it is taught is mostly correct, but the way it is used is mostly incorrect.”

Despite these problems, and despite being implicated in the replication crisis mentioned earlier, p-values have largely served scientists and engineers well. With larger data sets and newer modelling methods like artificial neural networks, however, we need different tools to ensure generalisability.

I hope you will join me for the next post in this series, where we will illustrate this with an example from science and engineering. We will start by showing you how to generate findings that DO NOT generalise. Thanks to all the protections the developers have put in place, this is actually quite difficult in JMP. This will be useful for understanding how the tools in JMP guard you against these kinds of problems and how validation makes it easy for you to apply analytics with confidence.



Last Modified: Mar 14, 2024 11:41 AM
Comments
ChulHee_JMP
Staff

Very much looking forward to learning from this series.

It feels like watching an interesting documentary series on TV!

dlehman1
Level V

Considering that this is a post about generalisability, I think the statement "p-values have largely served scientists and engineers well" is somewhat misleading.  p-values (arguably) may be useful in thinking about generalisability from a sample to the population it was drawn from.  But many, if not most, applications involve generalizing from a sample to a somewhat different population - a different time period, different location, different people, etc.  p-values don't really serve anyone well in those situations without the assumption that the population you are making inferences about is essentially the same as the population the sample was drawn from.  That assumption may be reasonable for engineering contexts, but less so for the social sciences.  Too frequently, the appropriateness of the sample to the population for which inferences are being made, is glossed over by relying on the p-value to somehow take care of it.