JerryFish
Staff
Cringeworthy statistics statement #2 explained: Statistical significance vs practical importance

We've all heard someone say something that isn't "right." Sometimes you just let it go, either because it isn't important enough to challenge, or you don't want to cause conflict with the speaker.

This is the second installment in the "cringeworthy statistics" blog series. In general, here's how it works: First, I post a statistics-related question in a JMP Community Discussions thread. Community users are encouraged to reply. After about a week, I give my take on what is wrong with the statement in a blog post like this one.

Here is a table of the discussion topics and blog posts to date. Many thanks to those who have replied to each discussion post (and/or added comments to the blog posts)!

| Discussion Thread Topic | Blog Post | People Who Responded to Discussion (or Commented on Blog) |
| --- | --- | --- |
| #1: Misinterpretation of p-value | Dangers of misinterpreting p-values | @dale_lehman, @P_Bartell, @statman, @ih, @Georg, @brady_brady |
| #2: Too much emphasis on statistical significance | This blog post | @P_Bartell, @statman, @Craige_Hales, @ih, @markschwab |
| #3: Reporting Measurements | Coming soon! | Please post your comments in the Discussions space. Also feel free to contribute cringeworthy statistics statements of your own! |

So let's take a look at the second cringeworthy statement I posted...

My take on Cringeworthy #2

Recall in our scenario that a study had been conducted to determine why yield had fallen in a production process. A t-test was performed to compare subassemblies from two suppliers. Indeed, the t-test showed a very low p-value, giving high confidence that the mean measurements of the two suppliers' parts differ.

The team manager wanted to jump on the fact that the p-value was small and immediately start a study of why the two population means were different. (Perhaps he was driven by past experience: a difference between these vendors had once caused a problem.) But what the manager didn't mention was whether the difference was "practically important."

In my experience, people often confuse "statistical significance" with "practical importance." 

  • Statistical significance is simply a mathematical statement that we can detect some difference in the means of the two populations at some level of confidence. This is nice, but it says nothing about whether that difference is important to us.
  • Practical importance is based on your judgment, experience, and wisdom. Maybe the statistical test detects a difference of 0.1 units between the two treatments. Is a difference of 0.1 units important to your application? If not, then the resources devoted to discovering the source of this difference might be better spent looking for other sources of poor yield. (The short simulation after this list makes the contrast concrete.)
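
To make the distinction concrete, here is a minimal sketch of my own (not from the original study) using Python and SciPy. The sample sizes, the nominal value of 100 units, and the 0.1-unit shift between "suppliers" are all invented for illustration; the point is that with enough data, even a tiny difference in means yields a tiny p-value.

```python
# Minimal illustration: a 0.1-unit difference in means is "statistically
# significant" with large samples, but that says nothing about whether
# 0.1 units matters for your process. All numbers below are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulated subassembly measurements from two hypothetical suppliers,
# differing by only 0.1 units on average.
supplier_a = rng.normal(loc=100.0, scale=2.0, size=5000)
supplier_b = rng.normal(loc=100.1, scale=2.0, size=5000)

t_stat, p_value = stats.ttest_ind(supplier_a, supplier_b)
observed_diff = supplier_b.mean() - supplier_a.mean()

print(f"p-value:             {p_value:.2g}")         # very small -> "significant"
print(f"observed difference: {observed_diff:.3f} units")  # ~0.1 units -> important?
```

The t-test dutifully reports "significance," but whether a 0.1-unit shift is worth an investigation is an engineering judgment, not a statistical one.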

It is perhaps unfortunate that, in English, the term "significance" can be read as "importance." I don't know of a better word to use than significance, but it can mislead some people.

Moral of this story: Make sure you understand the difference between statistical significance and practical importance when interpreting the results of a statistical analysis!

Now, please come on over to Cringeworthy Discussion #3!
